Anoop, if you wish to unsubscribe from the ActiveMQ users list use the
email address listed on the website [1] (i.e.
users-unsubscr...@activemq.apache.org).


Justin

[1] https://activemq.apache.org/contact

On Fri, Jan 19, 2024 at 11:06 AM Anoop Sikhwal
<anoop.sikh...@infodesk.com.invalid> wrote:

> Unsubscribe
>
> ------------------------------
> *From:* John Lilley <john.lil...@redpointglobal.com.INVALID>
> *Sent:* Friday, January 19, 2024 10:24:39 PM
> *To:* users@activemq.apache.org <users@activemq.apache.org>
> *Subject:* RE: Trouble with Replication HA Master/Slave config performing
> failback
>
>
> Hi Paul,
>
>
>
> Thanks for all of this!  One thing you said struck me as odd: “You will
> need a separate PVC for both your Primary and standby broker instances.”
>
>
>
> By “separate PVC” do you mean completely different volumes?  Or a shared
> volume with a mount in each pod?
>
>
>
> Does this mean that the primary and standby do not communicate anything
> via the lock file?  Perhaps I am confused by the name “lock file” which
> implies some kind of concurrency management.
>
>
>
> Instead of two separate PVC volumes, would it be OK to use a shared PVC
> (in Azure, this is an NFS-mounted FileShare volume) but set the primary and
> backup to use different directories within the volume, to save cost and
> complexity?
>
>
>
> john
>
>
>
>
>
> John Lilley
>
> Data Management Chief Architect, Redpoint Global Inc.
>
> 34 Washington Street, Suite 205 Wellesley Hills, MA 02481
>
> *M:* +1 7209385761 | john.lil...@redpointglobal.com
>
> *From:* Shields, Paul Michael <paul.shie...@hpe.com>
> *Sent:* Friday, January 19, 2024 9:48 AM
> *To:* users@activemq.apache.org
> *Subject:* Re: Trouble with Replication HA Master/Slave config performing
> failback
>
>
>
>
>
>
> Hi John,
>
>
>
> I am using the activemq-artemis-operator from ArtemisCloud.io.
> That operator only creates the PVC when I specify persistenceEnabled=true.
> So that is how I create a PVC for each broker instance.  I then have to
> customize the broker.xml file to only have the
> <node-manager-lock-directory> on the PVC and the other Artemis data
> directories on the local pod filesystem.
>
>
>
> If you are deploying directly to k8s without an operator then you could
> have persistenceEnabled=false, but would have to define a PVC that would
> mount the <node-manager-lock-directory> to the broker pod to store the
> server.lock file.  You will need a separate PVC for both your Primary and
> standby broker instances.
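>
>
> For illustration, a minimal sketch of that setup (the PVC and volume names
> are hypothetical; the mount path matches the <node-manager-lock-directory>
> shown below, and since server.lock is tiny the requested size is nominal):
>
> apiVersion: v1
> kind: PersistentVolumeClaim
> metadata:
>   name: broker-primary-lock            # one such PVC per broker instance
> spec:
>   accessModes: ["ReadWriteOnce"]
>   resources:
>     requests:
>       storage: 1Gi
>
> and in each broker pod template:
>
>       volumes:
>         - name: lock-dir
>           persistentVolumeClaim:
>             claimName: broker-primary-lock
>       containers:
>         - name: broker
>           volumeMounts:
>             - name: lock-dir
>               mountPath: /opt/cray-dvs-mqtt/data/journal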
>
>
>
> Yes, I have attached the broker.xml file to this email.  Note that the
> only difference between my primary and standby instance broker.xml files is
> the <ha-policy> definition.  So, I will attach the whole broker.xml from my
> Primary and just post the standby <ha-policy> portion.
>
>
>
> Show the PVC mount:
>
> *ncn-m001:~ #* kubectl -n dvs exec -it cray-dvs-mqtt-ss-0 -- /bin/bash
>
> [jboss@cray-dvs-mqtt-ss-0 ~]$ df
>
> Filesystem           1K-blocks      Used Available Use% Mounted on
>
> containerd_overlayfs 676576488 219342084 457234404  33% /
>
> tmpfs                    65536         0     65536   0% /dev
>
> tmpfs                131773112         0 131773112   0% /sys/fs/cgroup
>
> containerd_overlayfs 676576488 219342084 457234404  33% /etc/hostname
>
> shm                      65536         0     65536   0% /dev/shm
>
> /dev/sdc3            187445840    222380 187223460   1% /etc/hosts
>
> /dev/rbd2              1992552     34876   1941292   2%
> /opt/cray-dvs-mqtt/data   <-- this is the PVC mount
>
> tmpfs                263443828         4 263443824   1%
> /amq/extra/secrets/cray-dvs-mqtt-props
>
> tmpfs                263443828        12 263443816   1%
> /run/secrets/kubernetes.io/serviceaccount
>
> tmpfs                131773112         0 131773112   0% /proc/acpi
>
> tmpfs                131773112         0 131773112   0% /proc/scsi
>
> tmpfs                131773112         0 131773112   0% /sys/firmware
>
>
>
> Show the lock file on the PVC:
>
> [jboss@cray-dvs-mqtt-ss-0 ~]$ od -x
> /opt/cray-dvs-mqtt/data/journal/server.lock
>
> 0000000 3050 fa30 65aa b57a 117b 9eee b25c bf25
>
> 0000020 7183 0096
>
> 0000023
>
> [jboss@cray-dvs-mqtt-ss-0 ~]$ od -x
> /opt/cray-dvs-mqtt/data/journal/serverlock.1
>
> 0000000
>
> [jboss@cray-dvs-mqtt-ss-0 ~]$ od -x
> /opt/cray-dvs-mqtt/data/journal/serverlock.2
>
> 0000000
>
> [jboss@cray-dvs-mqtt-ss-0 ~]$ exit
>
> exit
>
> *ncn-m001:~ #* kubectl -n dvs exec -it cray-dvs-mqtt-ss-1 -- /bin/bash
>
> [jboss@cray-dvs-mqtt-ss-1 ~]$ ls -l /opt/cray-dvs-mqtt/data/journal/
>
> total 4
>
> -rw-r--r-- 1 jboss root 19 Jan 17 22:26 server.lock
>
> -rw-r--r-- 1 jboss root  0 Jan 17 22:25 serverlock.1
>
> -rw-r--r-- 1 jboss root  0 Jan 17 22:25 serverlock.2
>
> [jboss@cray-dvs-mqtt-ss-1 ~]$ od -x
> /opt/cray-dvs-mqtt/data/journal/server.lock
>
> 0000000 3030 fa30 65aa b57a 117b 9eee b25c bf25
>
> 0000020 7183 0096
>
> 0000023
>
> [jboss@cray-dvs-mqtt-ss-1 ~]$
>
>
>
> Primary instance broker.xml file:  Note I have logging enabled. You
> probably don’t want that in a production environment.
>
>
>
> [jboss@cray-dvs-mqtt-ss-0 ~]$ cat amq-broker/etc/broker.xml
>
> <?xml version='1.0'?>
>
> <!--
>
> Licensed to the Apache Software Foundation (ASF) under one
>
> or more contributor license agreements.  See the NOTICE file
>
> distributed with this work for additional information
>
> regarding copyright ownership.  The ASF licenses this file
>
> to you under the Apache License, Version 2.0 (the
>
> "License"); you may not use this file except in compliance
>
> with the License.  You may obtain a copy of the License at
>
>
>
>   http://www.apache.org/licenses/LICENSE-2.0
>
>
>
> Unless required by applicable law or agreed to in writing,
>
> software distributed under the License is distributed on an
>
> "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
>
> KIND, either express or implied.  See the License for the
>
> specific language governing permissions and limitations
>
> under the License.
>
> -->
>
>
>
> <configuration xmlns="urn:activemq"
>
>                xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>
>                xmlns:xi="http://www.w3.org/2001/XInclude"
>
>                xsi:schemaLocation="urn:activemq
> /schema/artemis-configuration.xsd">
>
>
>
>    <core xmlns="urn:activemq:core"
>          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>
>          xsi:schemaLocation="urn:activemq:core ">
>
>
>
>       <name>amq-broker</name>
>
>
>
>       <ha-policy>
>
>         <replication>
>
>           <master>
>
>             <check-for-live-server>true</check-for-live-server>
>
>           </master>
>
>         </replication>
>
>       </ha-policy>
>
>
>
>
>
>
>
>       <persistence-enabled>true</persistence-enabled>
>
>
>
>       <!-- this could be ASYNCIO, MAPPED, NIO
>
>            ASYNCIO: Linux Libaio
>
>            MAPPED: mmap files
>
>            NIO: Plain Java Files
>
>        -->
>
>       <journal-type>NIO</journal-type>
>
>
>
>
> <node-manager-lock-directory>/opt/cray-dvs-mqtt/data/journal</node-manager-lock-directory>
>
>
>
>       <paging-directory>data/paging</paging-directory>
>
>
>
>       <bindings-directory>data/bindings</bindings-directory>
>
>
>
>       <journal-directory>data/journal</journal-directory>
>
>
>
>
> <large-messages-directory>data/large-messages</large-messages-directory>
>
>
>
>
>
>       <!-- if you want to retain your journal uncomment this following
> configuration.
>
>
>
>       This will allow your system to keep 7 days of your data, up to 10G.
> Tweak it accordingly to your use case and capacity.
>
>
>
>       it is recommended to use a separate storage unit from the journal
> for performance considerations.
>
>
>
>       <journal-retention-directory period="7" unit="DAYS"
> storage-limit="10G">data/retention</journal-retention-directory>
>
>
>
>       You can also enable retention by using the argument
> journal-retention on the `artemis create` command -->
>
>
>
>
>
>
>
>       <journal-datasync>true</journal-datasync>
>
>
>
>       <journal-min-files>2</journal-min-files>
>
>
>
>       <journal-pool-files>10</journal-pool-files>
>
>
>
>       <journal-device-block-size>4096</journal-device-block-size>
>
>
>
>       <journal-file-size>10M</journal-file-size>
>
>             <!--
>
>         You can verify the network health of a particular NIC by
> specifying the <network-check-NIC> element.
>
>          <network-check-NIC>theNicName</network-check-NIC>
>
>         -->
>
>
>
>       <!--
>
>         Use this to use an HTTP server to validate the network
>
>          <network-check-URL-list>
> http://www.apache.org</network-check-URL-list> -->
>
>
>
>       <!-- <network-check-period>10000</network-check-period> -->
>
>       <!-- <network-check-timeout>1000</network-check-timeout> -->
>
>
>
>       <!-- this is a comma separated list, no spaces, just DNS or IPs
>
>            it should accept IPV6
>
>
>
>            Warning: Make sure you understand your network topology as this
> is meant to validate if your network is valid.
>
>                     Using IPs that could eventually disappear or be
> partially visible may defeat the purpose.
>
>                     You can use a list of multiple IPs, and if any ping
> succeeds the server is considered OK to continue running -->
>
>       <!-- <network-check-list>10.0.0.1</network-check-list> -->
>
>
>
>       <!-- use this to customize the ping used for ipv4 addresses -->
>
>       <!-- <network-check-ping-command>ping -c 1 -t %d
> %s</network-check-ping-command> -->
>
>
>
>       <!-- use this to customize the ping used for ipv6 addresses -->
>
>       <!-- <network-check-ping6-command>ping6 -c 1
> %2$s</network-check-ping6-command> -->
>
>
>
>
>
>
>
>     <connectors>
>
>         <!-- Connector used to be announced through cluster connections
> and notifications -->
>
>         <connector
> name="artemis">tcp://cray-dvs-mqtt-ss-0.cray-dvs-mqtt-hdls-svc.dvs.svc.cluster.local:61616</connector>
>
>     </connectors>
>
>
>
>
>
>
>
>       <!-- how often we are looking for how many bytes are being used on
> the disk in ms -->
>
>       <disk-scan-period>5000</disk-scan-period>
>
>
>
>       <!-- once the disk hits this limit the system will block, or close
> the connection in certain protocols
>
>            that won't support flow control. -->
>
>       <max-disk-usage>90</max-disk-usage>
>
>
>
>       <!-- should the broker detect dead locks and other issues -->
>
>       <critical-analyzer>true</critical-analyzer>
>
>
>
>       <critical-analyzer-timeout>120000</critical-analyzer-timeout>
>
>
>
>
> <critical-analyzer-check-period>60000</critical-analyzer-check-period>
>
>
>
>       <critical-analyzer-policy>HALT</critical-analyzer-policy>
>
>
>
>
>
>
>
>       <!-- the system will enter into page mode once you hit this limit.
> This is an estimate in bytes of how much the messages are using in memory
>
>
>
>       The system will use half of the available memory (-Xmx) by default
> for the global-max-size.
>
>       You may specify a different value here if you need to customize it
> to your needs.
>
>
>
>       <global-max-size>100Mb</global-max-size> -->
>
>
>
>       <!-- the maximum number of messages accepted before entering full
> address mode.
>
>            if global-max-size is specified the full address mode will be
> specified by whatever hits it first. -->
>
>       <global-max-messages>-1</global-max-messages>
>
>
>
>       <acceptors>
>
>
>
>          <!-- useEpoll means: it will use Netty epoll if you are on a
> system (Linux) that supports it -->
>
>          <!-- amqpCredits: The number of credits sent to AMQP producers -->
>
>          <!-- amqpLowCredits: The server will send the # credits specified
> at amqpCredits at this low mark -->
>
>          <!-- amqpDuplicateDetection: If you are not using duplicate
> detection, set this to false
>
>                                       as duplicate detection requires
> applicationProperties to be parsed on the server. -->
>
>          <!-- amqpMinLargeMessageSize: Determines how many bytes are
> considered large, so we start using files to hold their data.
>
>                                        default: 102400, -1 would mean to
> disable large message control -->
>
>
>
>          <!-- Note: If an acceptor needs to be compatible with HornetQ
> and/or Artemis 1.x clients add
>
>                     "anycastPrefix=jms.queue.;multicastPrefix=jms.topic."
> to the acceptor url.
>
>                     See https://issues.apache.org/jira/browse/ARTEMIS-1644
> for more information. -->
>
>
>
>
>
>          <!-- Acceptor for every supported protocol -->
>
>
>
>          <acceptor name="dvs">tcp://cray-dvs-mqtt-ss-0.cray-dvs-mqtt-hdls-svc.dvs.svc.cluster.local:1883?protocols=MQTT;tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;useEpoll=true;amqpCredits=1000;amqpMinCredits=300</acceptor>
>          <acceptor name="scaleDown">tcp://cray-dvs-mqtt-ss-0.cray-dvs-mqtt-hdls-svc.dvs.svc.cluster.local:61616?protocols=CORE;tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;useEpoll=true;amqpCredits=1000;amqpMinCredits=300</acceptor>
>
>       </acceptors>
>
>
>
>
>
>       <cluster-user>EtBU3LYU</cluster-user>
>
>
>
>       <cluster-password>xOKqDzwH</cluster-password>
>
>
>
>       <broadcast-groups>
>
>          <broadcast-group name="my-broadcast-group">
>             <jgroups-file>jgroups-ping.xml</jgroups-file>
>             <jgroups-channel>activemq_broadcast_channel</jgroups-channel>
>             <connector-ref>artemis</connector-ref>
>          </broadcast-group>
>
>       </broadcast-groups>
>
>
>
>       <discovery-groups>
>
>          <discovery-group name="my-discovery-group">
>             <jgroups-file>jgroups-ping.xml</jgroups-file>
>             <jgroups-channel>activemq_broadcast_channel</jgroups-channel>
>             <refresh-timeout>10000</refresh-timeout>
>          </discovery-group>
>
>       </discovery-groups>
>
>
>
>       <cluster-connections>
>
>          <cluster-connection name="my-cluster">
>             <connector-ref>artemis</connector-ref>
>             <retry-interval>1000</retry-interval>
>             <retry-interval-multiplier>2</retry-interval-multiplier>
>             <max-retry-interval>32000</max-retry-interval>
>             <initial-connect-attempts>20</initial-connect-attempts>
>             <reconnect-attempts>-1</reconnect-attempts>
>             <use-duplicate-detection>true</use-duplicate-detection>
>             <message-load-balancing>ON_DEMAND</message-load-balancing>
>             <max-hops>1</max-hops>
>             <discovery-group-ref discovery-group-name="my-discovery-group"/>
>          </cluster-connection>
>
>       </cluster-connections>
>
>
>
>
>
>       <security-settings>
>
>          <security-setting match="#">
>
>             <permission type="createNonDurableQueue" roles="admin"/>
>
>             <permission type="deleteNonDurableQueue" roles="admin"/>
>
>             <permission type="createDurableQueue" roles="admin"/>
>
>             <permission type="deleteDurableQueue" roles="admin"/>
>
>             <permission type="createAddress" roles="admin"/>
>
>             <permission type="deleteAddress" roles="admin"/>
>
>             <permission type="consume" roles="admin"/>
>
>             <permission type="browse" roles="admin"/>
>
>             <permission type="send" roles="admin"/>
>
>             <!-- we need this otherwise ./artemis data imp wouldn't work
> -->
>
>             <permission type="manage" roles="admin"/>
>
>          </security-setting>
>
>       </security-settings>
>
>
>
>       <address-settings>
>
>          <!-- if you define auto-create on certain queues, management has
> to be auto-create -->
>
>          <address-setting match="//activemq.management#">
> <https://linkprotect.cudasvc.com/url?a=https%3a%2f%2f%2f%2factivemq.management%23%26quot%3b%26gt%3b&c=E,1,BRroUY8kfUeX80gvrp7t72VVHVGMPASjpLV9yYkz4JVee8GpRfZvI_Kw2UOOUNV2uD0jK5gOPSY2kMhPWHhNzPMAHND9p06dXyXm9NI42fIRLB9HOfs,&typo=1&ancr_add=1>
>
>             <dead-letter-address>DLQ</dead-letter-address>
>
>             <expiry-address>ExpiryQueue</expiry-address>
>
>             <redelivery-delay>0</redelivery-delay>
>
>             <!-- with -1 only the global-max-size is in use for limiting
> -->
>
>             <max-size-bytes>-1</max-size-bytes>
>
>
> <message-counter-history-day-limit>10</message-counter-history-day-limit>
>
>             <address-full-policy>PAGE</address-full-policy>
>
>             <auto-create-queues>true</auto-create-queues>
>
>             <auto-create-addresses>true</auto-create-addresses>
>
>          </address-setting>
>
>          <!--default for catch all-->
>
>          <address-setting match="#">
>
>             <redistribution-delay>0</redistribution-delay>
>
>             <dead-letter-address>DLQ</dead-letter-address>
>
>             <expiry-address>ExpiryQueue</expiry-address>
>
>             <redelivery-delay>0</redelivery-delay>
>
>
>
>             <!-- if max-size-bytes and max-size-messages were both
> enabled, the system will enter into paging
>
>                  based on the first attribute to hit the maximum value -->
>
>             <!-- limit for the address in bytes, -1 means unlimited -->
>
>             <max-size-bytes>-1</max-size-bytes>
>
>
>
>             <!-- limit for the address in messages, -1 means unlimited -->
>
>             <max-size-messages>-1</max-size-messages>
>
>
>
>             <!-- the size of each file on paging. Notice we keep files in
> memory while they are in use.
>
>                  Lower this setting if you have too many queues in memory.
> -->
>
>             <page-size-bytes>10M</page-size-bytes>
>
>
>
>             <!-- limit how many messages are read from paging into the
> Queue. -->
>
>             <max-read-page-messages>-1</max-read-page-messages>
>
>
>
>             <!-- limit how much memory is read from paging into the Queue.
> -->
>
>             <max-read-page-bytes>20M</max-read-page-bytes>
>
>
>
>
> <message-counter-history-day-limit>10</message-counter-history-day-limit>
>
>             <address-full-policy>PAGE</address-full-policy>
>
>             <auto-create-queues>true</auto-create-queues>
>
>             <auto-create-addresses>true</auto-create-addresses>
>
>             <auto-delete-queues>false</auto-delete-queues>
>
>             <auto-delete-addresses>false</auto-delete-addresses>
>
>          </address-setting>
>
>       </address-settings>
>
>
>
>       <addresses>
>
>          <address name="DLQ">
>
>             <anycast>
>
>                <queue name="DLQ" />
>
>             </anycast>
>
>          </address>
>
>          <address name="ExpiryQueue">
>
>             <anycast>
>
>                <queue name="ExpiryQueue" />
>
>             </anycast>
>
>          </address>
>
>
>
>       </addresses>
>
>
>
>
>
>
>
>           <broker-plugins>
>
>              <broker-plugin
> class-name="org.apache.activemq.artemis.core.server.plugin.impl.LoggingActiveMQServerPlugin">
>
>                 <property key="LOG_ALL_EVENTS" value="true"/>
>
>                 <property key="LOG_CONNECTION_EVENTS" value="true"/>
>
>                 <property key="LOG_SESSION_EVENTS" value="true"/>
>
>                 <property key="LOG_CONSUMER_EVENTS" value="true"/>
>
>                 <property key="LOG_DELIVERING_EVENTS" value="true"/>
>
>                 <property key="LOG_SENDING_EVENTS" value="true"/>
>
>                 <property key="LOG_INTERNAL_EVENTS" value="true"/>
>
>              </broker-plugin>
>
>           </broker-plugins>
>
>
>
>       <metrics>
>          <plugin class-name="com.redhat.amq.broker.core.server.metrics.plugins.ArtemisPrometheusMetricsPlugin"/>
>       </metrics>
>
>    </core>
>
> </configuration>
>
> [jboss@cray-dvs-mqtt-ss-0 ~]$
>
>
>
> Standby instance broker.xml <ha-policy> stanza:
>
>
>
>       <ha-policy>
>
>         <replication>
>
>           <slave>
>
>             <allow-failback>true</allow-failback>
>
>           </slave>
>
>         </replication>
>
>       </ha-policy>
>
>
>
>
>
> *From: *John Lilley <john.lil...@redpointglobal.com.INVALID>
> *Date: *Friday, January 19, 2024 at 10:21 AM
> *To: *users@activemq.apache.org <users@activemq.apache.org>
> *Cc: *Lino Pereira <lino.pere...@redpointglobal.com>
> *Subject: *RE: Trouble with Replication HA Master/Slave config performing
> failback
>
> Hi Paul,
>
>
>
> Thanks for the information!  It sounds like you have experience with AMQ
> in K8S.  Can you share your two broker.xml files with us?
>
>
>
> Lino:  do we currently set persistenceEnabled=false?  I would expect
> there’s no reason for setting it true, since the pod-local storage will
> evaporate with the pod.
>
>
>
> Thanks
>
> John
>
>
>
>
>
>
>
>
> *John Lilley *
>
> *Data Management Chief Architect, Redpoint Global Inc. *
>
> 34 Washington Street, Suite 205 Wellesley Hills, MA 02481
>
> *M:* +1 7209385761 | john.lil...@redpointglobal.com
>
> *From:* Shields, Paul Michael <paul.shie...@hpe.com>
> *Sent:* Friday, January 19, 2024 8:51 AM
> *To:* users@activemq.apache.org
> *Subject:* Re: Trouble with Replication HA Master/Slave config performing
> failback
>
>
>
>
>
>
> Hi John,
>
>
>
> The <node-manager-lock-directory> needs to be specified in both the
> master and slave broker.xml files.
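>
>
> For reference, the element in both of my files points at that broker's own
> PV mount, i.e.:
>
>
> <node-manager-lock-directory>/opt/cray-dvs-mqtt/data/journal</node-manager-lock-directory>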
>
>
>
> The nodeUUID, which is contained in the server.lock file, was changing
> between active broker startups. The server.lock file is stored in the
> /home/jboss/amq-broker/data/journal directory in the pod filesystem. Our
> current configuration of the broker has persistenceEnabled=false, which
> causes any data/broker state to be removed when the broker pod is
> destroyed. Thus a new nodeUUID was being generated on each Primary broker
> startup. When the Primary broker instance starts up it looks for an
> existing server.lock file. When it does not find one it generates a new
> nodeUUID and places it in the server.lock file; otherwise it uses the
> nodeUUID contained within the existing server.lock file. The Primary broker
> then communicates this nodeUUID through the cluster connection to the
> Standby broker instance on startup. The Standby broker instance uses the
> nodeUUID to determine if the Primary broker instance is part of its HA
> pair. The Standby broker instance saved off the original nodeUUID when the
> HA cluster was first started, and since the recently communicated nodeUUID
> does not match the nodeUUID that it previously stored, the failback is not
> performed. Once persistence was enabled, the Standby broker instance
> performed the failback to the Primary instance and was no longer in the
> "active" state.
>
>
>
> This is what I have observed in our Kubernetes environment.  Justin, please
> make any corrections to my description of the nodeUUID use.
>
>
>
> Paul
>
> *From: *John Lilley <john.lil...@redpointglobal.com.INVALID>
> *Date: *Thursday, January 18, 2024 at 5:49 PM
> *To: *users@activemq.apache.org <users@activemq.apache.org>
> *Subject: *RE: Trouble with Replication HA Master/Slave config performing
> failback
>
> Hi Justin,
>
>
>
> Can you elaborate on the lock file in the <node-manager-lock-directory>?
>
> Does it need to be specified in the broker.xml for both the master and the
> slave?
>
> Does it actually get used as a “lock” in the concurrency sense, to
> coordinate the live/backup election?
>
>
>
> Thanks
>
> john
>
>
>
>
>
>
> *John Lilley *
>
> *Data Management Chief Architect, Redpoint Global Inc. *
>
> 34 Washington Street, Suite 205 Wellesley Hills, MA 02481
>
> *M:* +1 7209385761 | john.lil...@redpointglobal.com
>
> *From:* Justin Bertram <jbert...@apache.org>
> *Sent:* Monday, January 15, 2024 9:56 PM
> *To:* users@activemq.apache.org
> *Subject:* Re: Trouble with Replication HA Master/Slave config performing
> failback
>
>
>
>
>
>
> In the replication case you still need a PV in K8s, but I don't believe
> you need to put the whole journal on the PV (and suffer the performance
> penalty). You should just need to point the node-manager-lock-directory in
> broker.xml to the PV.
>
>
>
>
>
> Justin
>
>
>
> On Mon, Jan 15, 2024 at 4:35 PM Lino Pereira <
> lino.pere...@redpointglobal.com.invalid> wrote:
>
> Hi Justin,
>
> Based on your reply in the provided link, are you saying that even for the
> replication HA case, you still need an external FileShare PV, in K8s, so
> that the nodeID can be persisted between restarts? In this use case, is the
> performance of this FileShare less of a concern than it is for the
> shared-store use case?
>
> Thanks,
> Lino
>
>
>
>
> *Lino Pereira *
>
> *C++ Developer, Redpoint Global Inc. *
>
> 34 Washington Street, Suite 205 Wellesley Hills, MA 02481
>
> lino.pere...@redpointglobal.com
>
> -----Original Message-----
> From: Justin Bertram <jbert...@apache.org>
> Sent: Friday, January 12, 2024 4:53 PM
> To: users@activemq.apache.org
> Subject: Re: Trouble with Replication HA Master/Slave config performing
> failback
>
>
> I sent that email on December 15. You can find it on the web-based mailing
> list interface [1].
>
> Justin
>
> [1] https://lists.apache.org/thread/5bv74br0rx5nxgk816tvbq53y51vch1l
>
> On Fri, Jan 12, 2024 at 4:45 PM Shields, Paul Michael <
> paul.shie...@hpe.com>
> wrote:
>
> > Hi Justin,
> >
> > Forgive me, but I don’t remember a journal persistence issue discussion. I
> > did an email search too and did not see that discussion either. Would you
> > please refresh me on it? I may have inadvertently deleted that email.
> >
> > I do see data being replicated to my production slave/standby instance on
> > initial startup of the two broker instances. Failover works as expected.
> > But when the primary instance starts back up I do not see the active slave
> > replicate to the starting primary, and I do not see the active slave
> > restart and go to the backup server state.
> >
> > What is the communication sequence between the active backup broker and
> > the primary when the primary starts up after it quit? In other words, how
> > is the active backup broker notified to start replication to the newly
> > started primary broker?
> >
> > I ran the replicated-failback example again with TRACE level logging
> > turned on and I do see the attemptFailback=false message in the
> > target/server1/log/artemis.log file at startup, so it must be changed
> > later, or am I not understanding the purpose of this flag?
> >
> > 2024-01-12 14:40:18,341 DEBUG
> > [org.apache.activemq.artemis.core.server.cluster.BackupManager] ******
> > BackupManager connecting to DiscoveryBackupConnector
> > [group=DiscoveryGroupConfiguration{name='dg-group1', refreshTimeout=5000,
> > discoveryInitialWaitTimeout=10000}]
> >
> > 2024-01-12 14:40:18,341 DEBUG
> >
> [org.apache.activemq.artemis.core.server.impl.SharedNothingBackupActivation]
> > Starting Backup Server
> >
> > 2024-01-12 14:40:18,341 INFO [org.apache.activemq.artemis.core.server]
> > AMQ221109: Apache ActiveMQ Artemis Backup Server version 2.28.0 [null]
> > started, waiting live to fail before it gets active
> >
> > 2024-01-12 14:40:18,341 TRACE
> >
> [org.apache.activemq.artemis.core.server.impl.SharedNothingBackupActivation]
> > Setting server state as started
> >
> > 2024-01-12 14:40:18,341 TRACE
> >
> [org.apache.activemq.artemis.core.server.impl.SharedNothingBackupActivation]
> > looking up the node through nodeLocator.locateNode()
> >
> > 2024-01-12 14:40:18,341 DEBUG
> >
> [org.apache.activemq.artemis.core.server.impl.SharedNothingBackupActivation]
> > Connecting towards a possible live, connection
> > information=Pair[a=TransportConfiguration(name=netty-connector,
> >
> factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory)
> > ?port=61616&host=localhost, b=null],
> > nodeID=ab94924c-9f4e-11ee-95ee-569984593a04
> >
> > 2024-01-12 14:40:18,341 DEBUG
> >
> [org.apache.activemq.artemis.core.server.impl.SharedNothingBackupActivation]
> > attemptFailback=false, nodeID=ab94924c-9f4e-11ee-95ee-569984593a04
> >
> > 2024-01-12 14:40:18,341 TRACE
> >
> [org.apache.activemq.artemis.core.server.impl.SharedNothingBackupActivation]
> > Calling
> >
> clusterController.connectToNodeInReplicatedCluster(TransportConfiguration(name=netty-connector,
> >
> factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory)
> > ?port=61616&host=localhost)
> >
> > From my production slave broker log
> >
> >
> > 2024-01-11 18:24:14,082 DEBUG
> >
> [org.apache.activemq.artemis.core.server.impl.SharedNothingBackupActivation]
> > Starting Backup Server
> >
> > 2024-01-11 18:24:14,083 INFO [org.apache.activemq.artemis.core.server]
> > AMQ221109: Apache ActiveMQ Artemis Backup Server version 2.28.0 [null]
> > started, waiting live to fail before it gets active
> >
> > 2024-01-11 18:24:14,083 TRACE
> >
> [org.apache.activemq.artemis.core.server.impl.SharedNothingBackupActivation]
> > Setting server state as started
> >
> > 2024-01-11 18:24:14,083 TRACE
> >
> [org.apache.activemq.artemis.core.server.impl.SharedNothingBackupActivation]
> > looking up the node through nodeLocator.locateNode()
> >
> > 2024-01-11 18:24:14,083 DEBUG
> >
> [org.apache.activemq.artemis.core.server.impl.SharedNothingBackupActivation]
> > Connecting towards a possible live, connection
> > information=Pair[a=TransportConfiguration(name=artemis,
> >
> factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory)
> >
> ?port=61616&host=cray-dvs-mqtt-ss-0-cray-dvs-mqtt-hdls-svc-dvs-svc-cluster-local,
> > b=null], nodeID=86d380d3-b0ae-11ee-9cff-7abbf21b3434
> >
> > 2024-01-11 18:24:14,083 DEBUG
> >
> [org.apache.activemq.artemis.core.server.impl.SharedNothingBackupActivation]
> > attemptFailback=false, nodeID=86d380d3-b0ae-11ee-9cff-7abbf21b3434
> >
> > 2024-01-11 18:24:14,083 TRACE
> >
> [org.apache.activemq.artemis.core.server.impl.SharedNothingBackupActivation]
> > Calling
> >
> clusterController.connectToNodeInReplicatedCluster(TransportConfiguration(name=artemis,
> >
> factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory)
> >
> ?port=61616&host=cray-dvs-mqtt-ss-0-cray-dvs-mqtt-hdls-svc-dvs-svc-cluster-local)
> >
> > Thanks,
> > Paul
> > From: Justin Bertram <jbert...@apache.org>
> > Date: Friday, January 12, 2024 at 3:38 PM
> > To: users@activemq.apache.org <users@activemq.apache.org>
> > Subject: Re: Trouble with Replication HA Master/Slave config performing
> > failback
> > > Is it possible to use jgroups instead of UDP for the cluster connection
> > in the example?
> >
> > Yes. Check out examples/features/clustered/clustered-jgroups for an
> > example configuration. Keep in mind that UDP, JGroups, etc. is *only* used
> > for initial discovery. Once brokers are discovered then normal TCP
> > connections are created.
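> >
> > A jgroups-based discovery setup has the same shape as the one already in
> > your broker.xml, e.g. (file and channel names taken from that config):
> >
> >       <broadcast-group name="my-broadcast-group">
> >          <jgroups-file>jgroups-ping.xml</jgroups-file>
> >          <jgroups-channel>activemq_broadcast_channel</jgroups-channel>
> >          <connector-ref>artemis</connector-ref>
> >       </broadcast-group>
> >
> >       <discovery-group name="my-discovery-group">
> >          <jgroups-file>jgroups-ping.xml</jgroups-file>
> >          <jgroups-channel>activemq_broadcast_channel</jgroups-channel>
> >          <refresh-timeout>10000</refresh-timeout>
> >       </discovery-group>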
> >
> > Before I continue looking through your logs, etc. can you confirm that
> you
> > resolved the journal persistence issue I described in my previous email?
> If
> > that's not resolved there's no chance for failback to occur.
> >
> >
> > Justin
> >
> >
> >
> > On Thu, Jan 11, 2024 at 3:09 PM Shields, Paul Michael <
> > paul.shie...@hpe.com>
> > wrote:
> >
> > > Hi Justin,
> > >
> > > I tried the example and it works. I turned up logging to get a success
> > > path through the code to maybe see where things get derailed. Is it
> > > possible to use jgroups instead of UDP for the cluster connection in the
> > > example? That would give me a better success signature.
> > >
> > > My standby instance has this in the broker.xml, which is the same as in
> > > the example:
> > >
> > > <ha-policy>
> > >
> > > <replication>
> > >
> > > <slave>
> > >
> > > <allow-failback>true</allow-failback>
> > >
> > > </slave>
> > >
> > > </replication>
> > >
> > > </ha-policy>
> > >
> > > I am seeing this series of log messages that suggests that failback is
> > > not enabled. Is attemptFailback=false the default at this point in
> > > startup and changed later, or has something gone wrong at this point?
> > >
> > > 2024-01-11 18:24:14,066 TRACE
> > > [org.apache.activemq.artemis.core.client.impl.Topology]
> Topology@7e0a913c
> > [owner=ServerLocatorImpl
> > > [initialConnectors=[TransportConfiguration(name=artemis,
> > >
> >
> factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory)
> > >
> >
> ?port=61616&host=cray-dvs-mqtt-ss-0-cray-dvs-mqtt-hdls-svc-dvs-svc-cluster-local],
> > >
> >
> discoveryGroupConfiguration=DiscoveryGroupConfiguration{name='my-discovery-group',
> > > refreshTimeout=10000, discoveryInitialWaitTimeout=10000}]] informing
> > > QuorumManager(server=null) about node up =
> > > 86d380d3-b0ae-11ee-9cff-7abbf21b3434 connector =
> > > Pair[a=TransportConfiguration(name=artemis,
> > >
> >
> factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory)
> > >
> >
> ?port=61616&host=cray-dvs-mqtt-ss-0-cray-dvs-mqtt-hdls-svc-dvs-svc-cluster-local,
> > > b=null]
> > >
> > > 2024-01-11 18:24:14,067 TRACE
> > > [org.apache.activemq.artemis.core.client.impl.Topology]
> Topology@7e0a913c
> > [owner=ServerLocatorImpl
> > > [initialConnectors=[TransportConfiguration(name=artemis,
> > >
> >
> factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory)
> > >
> >
> ?port=61616&host=cray-dvs-mqtt-ss-0-cray-dvs-mqtt-hdls-svc-dvs-svc-cluster-local],
> > >
> >
> discoveryGroupConfiguration=DiscoveryGroupConfiguration{name='my-discovery-group',
> > > refreshTimeout=10000, discoveryInitialWaitTimeout=10000}]] informing
> > >
> >
> org.apache.activemq.artemis.core.server.impl.AnyLiveNodeLocatorForReplication@43d39048
> > > about node up = 86d380d3-b0ae-11ee-9cff-7abbf21b3434 connector =
> > > Pair[a=TransportConfiguration(name=artemis,
> > >
> >
> factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory)
> > >
> >
> ?port=61616&host=cray-dvs-mqtt-ss-0-cray-dvs-mqtt-hdls-svc-dvs-svc-cluster-local,
> > > b=null]
> > >
> > > 2024-01-11 18:24:14,067 DEBUG
> > > [org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl]
> > > ClientSessionFactoryImpl received backup update for live/backup pair =
> > > TransportConfiguration(name=artemis,
> > >
> >
> factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory)
> > >
> >
> ?port=61616&host=cray-dvs-mqtt-ss-0-cray-dvs-mqtt-hdls-svc-dvs-svc-cluster-local
> > > / null but it didn't belong to TransportConfiguration(name=artemis,
> > >
> >
> factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory)
> > >
> >
> ?port=61616&host=cray-dvs-mqtt-ss-0-cray-dvs-mqtt-hdls-svc-dvs-svc-cluster-local
> > >
> > > 2024-01-11 18:24:14,067 TRACE
> > >
> >
> [org.apache.activemq.artemis.core.server.impl.SharedNothingBackupActivation]
> > > Cluster Connected
> > >
> > > 2024-01-11 18:24:14,068 DEBUG
> > >
> >
> [org.apache.activemq.artemis.core.server.impl.SharedNothingBackupActivation]
> > > Starting backup manager
> > >
> > > 2024-01-11 18:24:14,069 DEBUG
> > > [org.apache.activemq.artemis.core.server.cluster.BackupManager] deploy
> > > backup config ClusterConnectionConfiguration{name='my-cluster',
> > address='',
> > > connectorName='artemis', clientFailureCheckPeriod=30000,
> > > connectionTTL=60000, retryInterval=1000, retryIntervalMultiplier=2.0,
> > > maxRetryInterval=32000, initialConnectAttempts=20,
> reconnectAttempts=-1,
> > > callTimeout=30000, callFailoverTimeout=-1, duplicateDetection=true,
> > > messageLoadBalancingType=ON_DEMAND, compositeMembers=null,
> > > staticConnectors=[], discoveryGroupName='my-discovery-group',
> maxHops=1,
> > > confirmationWindowSize=10485760, allowDirectConnectionsOnly=false,
> > > minLargeMessageSize=102400, clusterNotificationInterval=1000,
> > > clusterNotificationAttempts=2}
> > >
> > > 2024-01-11 18:24:14,081 DEBUG
> > > [org.apache.activemq.artemis.core.server.cluster.BackupManager] ******
> > > BackupManager connecting to DiscoveryBackupConnector
> > > [group=DiscoveryGroupConfiguration{name='my-discovery-group',
> > > refreshTimeout=10000, discoveryInitialWaitTimeout=10000}]
> > >
> > > 2024-01-11 18:24:14,082 DEBUG
> > >
> >
> [org.apache.activemq.artemis.core.server.impl.SharedNothingBackupActivation]
> > > Starting Backup Server
> > >
> > > 2024-01-11 18:24:14,083 INFO [org.apache.activemq.artemis.core.server]
> > > AMQ221109: Apache ActiveMQ Artemis Backup Server version 2.28.0 [null]
> > > started, waiting live to fail before it gets active
> > >
> > > 2024-01-11 18:24:14,083 TRACE
> > >
> >
> [org.apache.activemq.artemis.core.server.impl.SharedNothingBackupActivation]
> > > Setting server state as started
> > >
> > > 2024-01-11 18:24:14,083 TRACE
> > >
> >
> [org.apache.activemq.artemis.core.server.impl.SharedNothingBackupActivation]
> > > looking up the node through nodeLocator.locateNode()
> > >
> > > 2024-01-11 18:24:14,083 DEBUG
> > >
> >
> [org.apache.activemq.artemis.core.server.impl.SharedNothingBackupActivation]
> > > Connecting towards a possible live, connection
> > > information=Pair[a=TransportConfiguration(name=artemis,
> > >
> >
> factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory)
> > >
> >
> ?port=61616&host=cray-dvs-mqtt-ss-0-cray-dvs-mqtt-hdls-svc-dvs-svc-cluster-local,
> > > b=null], nodeID=86d380d3-b0ae-11ee-9cff-7abbf21b3434
> > >
> > > 2024-01-11 18:24:14,083 DEBUG
> > >
> >
> [org.apache.activemq.artemis.core.server.impl.SharedNothingBackupActivation]
> > > attemptFailback=false, nodeID=86d380d3-b0ae-11ee-9cff-7abbf21b3434
> > >
> > > 2024-01-11 18:24:14,083 TRACE
> > >
> >
> [org.apache.activemq.artemis.core.server.impl.SharedNothingBackupActivation]
> > > Calling
> > >
> >
> clusterController.connectToNodeInReplicatedCluster(TransportConfiguration(name=artemis,
> > >
> >
> factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory)
> > >
> >
> ?port=61616&host=cray-dvs-mqtt-ss-0-cray-dvs-mqtt-hdls-svc-dvs-svc-cluster-local)
> > >
> > > 2024-01-11 18:24:14,083 TRACE
> > > [org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl]
> > > getConnectionWithRetry::0 with retryInterval = 1000 multiplier = 2.0
> > >
> > > java.lang.Exception: trace
> > >
> > > at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.getConnectionWithRetry(ClientSessionFactoryImpl.java:846) ~[artemis-core-client-2.28.0.jar:2.28.0]
> > > at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.connect(ClientSessionFactoryImpl.java:252) ~[artemis-core-client-2.28.0.jar:2.28.0]
> > > at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:610) ~[artemis-core-client-2.28.0.jar:2.28.0]
> > > at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:628) ~[artemis-core-client-2.28.0.jar:2.28.0]
> > > at org.apache.activemq.artemis.core.server.cluster.ClusterController.connectToNodeInReplicatedCluster(ClusterController.java:313) ~[artemis-server-2.28.0.jar:2.28.0]
> > > at org.apache.activemq.artemis.core.server.impl.SharedNothingBackupActivation.tryConnectToNodeInReplicatedCluster(SharedNothingBackupActivation.java:348) ~[artemis-server-2.28.0.jar:2.28.0]
> > > at org.apache.activemq.artemis.core.server.impl.SharedNothingBackupActivation.run(SharedNothingBackupActivation.java:199) ~[artemis-server-2.28.0.jar:2.28.0]
> > > at org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$ActivationThread.run(ActiveMQServerImpl.java:4473) ~[artemis-server-2.28.0.jar:2.28.0]
> > >
> > > 2024-01-11 18:24:14,084 DEBUG
> > > [org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl]
> > > Trying reconnection attempt 0/0
> > >
> > > 2024-01-11 18:24:14,084 DEBUG
> > > [org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl]
> > > Trying to connect with
> > >
> >
> connectorFactory=org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory@66996f18
> > >
> > > and currentConnectorConfig: TransportConfiguration(name=artemis,
> > >
> >
> factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory)
> > >
> >
> ?port=61616&host=cray-dvs-mqtt-ss-0-cray-dvs-mqtt-hdls-svc-dvs-svc-cluster-local
> > >
> > > 2024-01-11 18:24:14,084 DEBUG
> > > [org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector]
> > > Connector NettyConnector
> > > [host=cray-dvs-mqtt-ss-0.cray-dvs-mqtt-hdls-svc.dvs.svc.cluster.local,
> > > port=61616, httpEnabled=false, httpUpgradeEnabled=false,
> > useServlet=false,
> > > servletPath=/messaging/ActiveMQServlet, sslEnabled=false, useNio=true]
> > > using native epoll
> > >
> > > 2024-01-11 18:24:14,084 DEBUG
> > > [org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector]
> > > Started EPOLL Netty Connector version 4.1.86.Final to
> > > cray-dvs-mqtt-ss-0.cray-dvs-mqtt-hdls-svc.dvs.svc.cluster.local:61616
> > >
> > > 2024-01-11 18:24:14,084 DEBUG
> > > [org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector]
> > > Remote destination:
> > > cray-dvs-mqtt-ss-0.cray-dvs-mqtt-hdls-svc.dvs.svc.cluster.local/
> > > 10.44.0.25:61616
> > >
> > >
> > > Thanks,
> > > Paul
> > >
> > > From: Justin Bertram <jbert...@apache.org>
> > > Date: Wednesday, December 13, 2023 at 3:23 PM
> > > To: users@activemq.apache.org <users@activemq.apache.org>
> > > Subject: Re: Trouble with Replication HA Master/Slave config performing
> > > failback
> > > ActiveMQ Artemis ships with an example in the
> > > examples/features/ha/replicated-failback directory that demonstrates
> how
> > > this works. I ran this example and it worked fine. I also started up
> the
> > > brokers manually and triggered failover and failback. Again, everything
> > > worked fine. My only guess is that there's something configured wrong
> or
> > > perhaps there's an environmental issue that's preventing failback from
> > > working properly for you. If you could provide clear steps on how to
> > > reproduce this issue I could investigate further.
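> > >
> > > If you want to run that example yourself from a source or binary
> > > distribution, the shipped examples are Maven projects, so (assuming
> > > Maven is installed) something like:
> > >
> > > cd examples/features/ha/replicated-failback
> > > mvn verify
> > >
> > > runs the whole live/backup failover and failback cycle.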
> > >
> > >
> > > Justin
> > >
> > > On Fri, Dec 8, 2023 at 3:45 PM Shields, Paul Michael <
> > paul.shie...@hpe.com
> > > >
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > I am having an issue with failback to the master broker. I start the
> > > > master broker then the slave broker. I see the replication from the
> > > > master to the slave happen. I stop the master/active broker and a
> > > > failover happens to the slave as expected. When the master is started,
> > > > a failback to it is expected but does not happen. The following is my
> > > > ha-policy config. Did I miss something in the ha-policy configuration?
> > > > We are using version 2.28.0 and Java 17.
> > > >
> > > > Thanks,
> > > > Paul
> > > >
> > > >
> > > > Master broker.xml config
> > > >
> > > > <ha-policy>
> > > >
> > > > <replication>
> > > >
> > > > <master>
> > > >
> > > > <check-for-live-server>true</check-for-live-server>
> > > >
> > > > </master>
> > > >
> > > > </replication>
> > > >
> > > > </ha-policy>
> > > >
> > > > Slave broker.xml config
> > > >
> > > > <ha-policy>
> > > >
> > > > <replication>
> > > >
> > > > <slave>
> > > >
> > > > <allow-failback>true</allow-failback>
> > > >
> > > > </slave>
> > > >
> > > > </replication>
> > > >
> > > > </ha-policy>
> > > >
> > > > The following are the logs from the master and slave brokers
> > > > HA “master” broker log after restart
> > > > 2023-12-08 20:42:57,509 INFO
> [org.apache.activemq.artemis.core.server]
> > > > AMQ221020: Started EPOLL Acceptor at
> > > > cray-dvs-mqtt-ss-0.cray-dvs-mqtt-hdls-svc.dvs.svc.cluster.local:1883 for
> > > > protocols [MQTT]
> > > > 2023-12-08 20:42:57,534 INFO
> [org.apache.activemq.artemis.core.server]
> > > > AMQ221020: Started EPOLL Acceptor at
> > > > cray-dvs-mqtt-ss-0.cray-dvs-mqtt-hdls-svc.dvs.svc.cluster.local:61616 for
> > > > protocols [CORE]
> > > > 2023-12-08 20:42:57,534 INFO
> [org.apache.activemq.artemis.core.server]
> > > > AMQ221007: Server is now live
> > > > 2023-12-08 20:42:57,534 INFO
> [org.apache.activemq.artemis.core.server]
> > > > AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.28.0
> > > > [amq-broker, nodeID=59d00b93-960a-11ee-b571-7ab0027aea5f]
> > > > 2023-12-08 20:42:57,539 INFO [org.apache.activemq.artemis] AMQ241003:
> > > > Starting embedded web server
> > > > 2023-12-08 20:42:57,625 INFO
> > > > [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841010:
> routed
> > > > message with ID: 15, result: NO_BINDINGS
> > > > 2023-12-08 20:42:57,627 INFO
> > > > [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841007:
> > created
> > > > queue:
> > > >
> > >
> >
> QueueImpl[name=$.artemis.internal.sf.my-cluster.9759f092-9609-11ee-9862-e26c98ea5364,
> > > > postOffice=PostOfficeImpl
> [server=ActiveMQServerImpl::name=amq-broker],
> > > > temp=false]@15525ed3
> > > > 2023-12-08 20:42:57,657 INFO
> > > > [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841000:
> > created
> > > > connection: RemotingConnectionImpl [ID=47edf16f, clientID=null,
> > > > nodeID=59d00b93-960a-11ee-b571-7ab0027aea5f,
> > > >
> > >
> >
> transportConnection=org.apache.activemq.artemis.core.remoting.impl.netty.NettyServerConnection@556a5ff2
> > > > [ID=47edf16f,
> > > > local= /10.44.0.18:61616, remote=/10.44.0.18:34508]]
> > > > 2023-12-08 20:42:57,720 INFO
> > > > [org.apache.activemq.hawtio.branding.PluginContextListener]
> Initialized
> > > > activemq-branding plugin
> > > > 2023-12-08 20:42:57,737 INFO
> [org.apache.activemq.artemis.core.server]
> > > > AMQ221027: Bridge ClusterConnectionBridge@1bb2e6ff
> > > >
> > >
> >
> [name=$.artemis.internal.sf.my-cluster.9759f092-9609-11ee-9862-e26c98ea5364,
> > > >
> > >
> >
> queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.9759f092-9609-11ee-9862-e26c98ea5364,
> > > > postOffice=PostOfficeImpl
> [server=ActiveMQServerImpl::name=amq-broker],
> > > > temp=false]@15525ed3 targetConnector=ServerLocatorImpl
> > > >
> (identity=(Cluster-connection-bridge::ClusterConnectionBridge@1bb2e6ff
> > > >
> > >
> >
> [name=$.artemis.internal.sf.my-cluster.9759f092-9609-11ee-9862-e26c98ea5364,
> > > >
> > >
> >
> queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.9759f092-9609-11ee-9862-e26c98ea5364,
> > > > postOffice=PostOfficeImpl
> [server=ActiveMQServerImpl::name=amq-broker],
> > > > temp=false]@15525ed3 targetConnector=ServerLocatorImpl
> > > > [initialConnectors=[TransportConfiguration(name=artemis,
> > > >
> > >
> >
> factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory)
> > > >
> > >
> >
> ?port=61616&host=cray-dvs-mqtt-ss-1-cray-dvs-mqtt-hdls-svc-dvs-svc-cluster-local],
> > > > discoveryGroupConfiguration=null]]::ClusterConnectionImpl@1409513883
> > > [nodeUUID=59d00b93-960a-11ee-b571-7ab0027aea5f,
> > > > connector=TransportConfiguration(name=artemis,
> > > >
> > >
> >
> factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory)
> > > >
> > >
> >
> ?port=61616&host=cray-dvs-mqtt-ss-0-cray-dvs-mqtt-hdls-svc-dvs-svc-cluster-local,
> > > > address=, server=ActiveMQServerImpl::name=amq-broker]))
> > > > [initialConnectors=[TransportConfiguration(name=artemis,
> > > >
> > >
> >
> factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory)
> > > >
> > >
> >
> ?port=61616&host=cray-dvs-mqtt-ss-1-cray-dvs-mqtt-hdls-svc-dvs-svc-cluster-local],
> > > > discoveryGroupConfiguration=null]] is connected
> > > > 2023-12-08 20:42:57,755 INFO
> > > > [org.apache.activemq.hawtio.plugin.PluginContextListener] Initialized
> > > > artemis-plugin plugin
> > > > 2023-12-08 20:42:57,766 INFO
> > > > [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841010:
> routed
> > > > message with ID: 18, result: NO_BINDINGS
> > > > 2023-12-08 20:42:57,767 INFO
> > > > [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841010:
> routed
> > > > message with ID: 20, result: NO_BINDINGS
> > > > 2023-12-08 20:42:57,968 INFO [io.hawt.HawtioContextListener]
> > > Initialising
> > > > hawtio services
> > > > 2023-12-08 20:42:57,970 INFO [io.hawt.system.ConfigManager]
> > > Configuration
> > > > will be discovered via system properties
> > > > 2023-12-08 20:42:57,971 INFO [io.hawt.jmx.JmxTreeWatcher] Welcome to
> > > > Hawtio 2.15.0
> > > > 2023-12-08 20:42:57,975 INFO
> > > > [io.hawt.web.auth.AuthenticationConfiguration] Starting hawtio
> > > > authentication filter, JAAS realm: "activemq" authorized role(s):
> > "admin"
> > > > role principal classes:
> > > > "org.apache.activemq.artemis.spi.core.security.jaas.RolePrincipal"
> > > > 2023-12-08 20:42:57,979 INFO [io.hawt.web.auth.LoginRedirectFilter]
> > > > Hawtio loginRedirectFilter is using 1800 sec. HttpSession timeout
> > > > 2023-12-08 20:42:57,986 INFO [io.hawt.web.proxy.ProxyServlet] Proxy
> > > > servlet is disabled
> > > > 2023-12-08 20:42:57,989 INFO
> > > > [io.hawt.web.servlets.JolokiaConfiguredAgentServlet] Jolokia
> overridden
> > > > property: [key=policyLocation,
> > > > value=file:/home/jboss/amq-broker/etc/jolokia-access.xml]
> > > > 2023-12-08 20:42:58,058 INFO [org.apache.activemq.artemis] AMQ241001:
> > > > HTTP Server started at
> > > > http://cray-dvs-mqtt-ss-0.cray-dvs-mqtt-hdls-svc.dvs.svc.cluster.local:8161
> > > >
> > > > 2023-12-08 20:42:58,058 INFO [org.apache.activemq.artemis] AMQ241002:
> > > > Artemis Jolokia REST API available at
> > > > http://cray-dvs-mqtt-ss-0.cray-dvs-mqtt-hdls-svc.dvs.svc.cluster.local:8161/console/jolokia
> > > >
> > > > 2023-12-08 20:42:58,058 INFO [org.apache.activemq.artemis] AMQ241004:
> > > > Artemis Console available at
> > > > http://cray-dvs-mqtt-ss-0.cray-dvs-mqtt-hdls-svc.dvs.svc.cluster.local:8161/console
> > > >
> > > > 2023-12-08 20:42:58,455 INFO
> > > > [org.apache.activemq.artemis.core.server.plugin.impl]
> > > > AMQ841000: created connection: RemotingConnectionImpl [ID=f85ec51b,
> > > > clientID=null, nodeID=59d00b93-960a-11ee-b571-7ab0027aea5f,
> > > >
> > >
> >
> transportConnection=org.apache.activemq.artemis.core.remoting.impl.netty.NettyServerConnection@1cb2285d
> > > > [ID=f85ec51b<mailto:transportConnection
> > > >
> > >
> >
> =org.apache.activemq.artemis.core.remoting.impl.netty.NettyServerConnection@1cb2285d
> > > [ID=f85ec51b>,
> > > > local= /10.44.0.18:61616, remote=/127.0.0.6:49865]]
> > > > 2023-12-08 20:42:59,524 INFO
> > > > [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841000:
> > created
> > > > connection: RemotingConnectionImpl [ID=7cef23a2, clientID=null,
> > > > nodeID=59d00b93-960a-11ee-b571-7ab0027aea5f,
> > > >
> > >
> >
> transportConnection=org.apache.activemq.artemis.core.remoting.impl.netty.NettyServerConnection@5f45858
> > > > [ID=7cef23a2<mailto:transportConnection
> > > >
> > >
> >
> =org.apache.activemq.artemis.core.remoting.impl.netty.NettyServerConnection@5f45858
> > > [ID=7cef23a2>,
> > > > local= /10.44.0.18:61616, remote=/127.0.0.6:59691]]
> > > > 2023-12-08 20:42:59,535 INFO
> > > > [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841010:
> routed
> > > > message with ID: 22, result: NO_BINDINGS
> > > > 2023-12-08 20:42:59,536 INFO
> > > > [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841002:
> > created
> > > > session name: 5fbf845a-960a-11ee-8014-12a7ed0622c7, session
> > connectionID:
> > > > 7cef23a2
> > > > 2023-12-08 20:42:59,549 INFO
> > > > [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841010:
> routed
> > > > message with ID: 24, result: NO_BINDINGS
> > > > 2023-12-08 20:42:59,550 INFO
> > > > [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841002:
> > created
> > > > session name: 5fc2919b-960a-11ee-8014-12a7ed0622c7, session
> > connectionID:
> > > > 7cef23a2
> > > > 2023-12-08 20:42:59,583 INFO
> > > > [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841010:
> routed
> > > > message with ID: 26, result: OK
> > > > 2023-12-08 20:42:59,584 INFO
> > > > [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841007:
> > created
> > > > queue:
> > > >
> > >
> >
> QueueImpl[name=notif.5fc32ddc-960a-11ee-8014-12a7ed0622c7.ActiveMQServerImpl_name=amq-broker,
> > > > postOffice=PostOfficeImpl
> [server=ActiveMQServerImpl::name=amq-broker],
> > > > temp=true]@5f46053b
> > > > 2023-12-08 20:42:59,593 INFO
> > > > [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841005:
> > created
> > > > consumer with ID: 0, with session name:
> > > 5fc2919b-960a-11ee-8014-12a7ed0622c7
> > > > 2023-12-08 20:42:59,594 INFO
> > > > [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841010:
> routed
> > > > message with ID: 28, result: OK
> > > > 2023-12-08 20:42:59,616 INFO
> > > > [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841009: sent
> > > > message with ID: 29, result: OK, transaction: UNAVAILABLE
> > > > 2023-12-08 20:42:59,615 INFO
> > > > [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841014:
> > > > acknowledged message:
> > > >
> > >
> >
> Reference[31]:NON-RELIABLE:CoreMessage[messageID=31,durable=false,userID=null,priority=0,
> > > > timestamp=0,expiration=0, durable=false,
> > > >
> > >
> >
> address=notif.5fc32ddc-960a-11ee-8014-12a7ed0622c7.ActiveMQServerImpl_name=amq-broker,size=248,properties=TypedProperties[
> *AMQ*RESET_QUEUE_DATA=true]]@967407085, > > > with transaction: null
> > > > 2023-12-08 20:42:59,621 INFO
> > > > [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841012:
> > > delivered
> > > > message with message ID: 31, to consumer on address:
> > > > activemq.notifications, queue:
> > > >
> > >
> >
> notif.5fc32ddc-960a-11ee-8014-12a7ed0622c7.ActiveMQServerImpl_name=amq-broker,
> > > > consumer sessionID: 5fc2919b-960a-11ee-8014-12a7ed0622c7,
> consumerID: 0
> > > > 2023-12-08 20:42:59,621 INFO
> > > > [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841014:
> > > > acknowledged message:
> > > >
> > >
> >
> Reference[32]:NON-RELIABLE:CoreMessage[messageID=32,durable=false,userID=null,priority=0,
> > > > timestamp=Fri Dec 08 20:42:59 UTC 2023,expiration=0, durable=false,
> > > >
> > >
> >
> address=notif.5fc32ddc-960a-11ee-8014-12a7ed0622c7.ActiveMQServerImpl_name=amq-broker,size=684,properties=TypedProperties[
> *AMQ*RoutingName=ExpiryQueue,*AMQ*Distance=0,*AMQ*Address=ExpiryQueue,
> *AMQ*NotifType=BINDING_ADDED,*AMQ*Binding_ID=7,*AMQ*
> FilterString=NULL-value,*AMQ*NotifTimestamp=1702068179613,*AMQ*ClusterName=ExpiryQueue59d00b93-960a-11ee-b571-7ab0027aea5f]]@1373537114,
> > > > with transaction: null
> > > > 2023-12-08 20:42:59,622 INFO
> > > > [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841012:
> > > delivered
> > > > message with message ID: 32, to consumer on address:
> > > > activemq.notifications, queue:
> > > >
> > >
> >
> notif.5fc32ddc-960a-11ee-8014-12a7ed0622c7.ActiveMQServerImpl_name=amq-broker,
> > > > consumer sessionID: 5fc2919b-960a-11ee-8014-12a7ed0622c7,
> consumerID: 0
> > > > 2023-12-08 20:42:59,622 INFO
> > > > [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841014:
> > > > acknowledged message:
> > > >
> > >
> >
> Reference[35]:NON-RELIABLE:CoreMessage[messageID=35,durable=false,userID=null,priority=0,
> > > > timestamp=Fri Dec 08 20:42:59 UTC 2023,expiration=0, durable=false,
> > > >
> > >
> >
> address=notif.5fc32ddc-960a-11ee-8014-12a7ed0622c7.ActiveMQServerImpl_name=amq-broker,size=636,properties=TypedProperties[
> *AMQ*RoutingName=DLQ,*AMQ*Distance=0,*AMQ*Address=DLQ,*AMQ*
> NotifType=BINDING_ADDED,*AMQ*Binding_ID=3,*AMQ*FilterString=NULL-value,
> *AMQ*NotifTimestamp=1702068179614,*AMQ*ClusterName=DLQ59d00b93-960a-11ee-b571-7ab0027aea5f]]@958347658,
> > > > with transaction: null
> > > > 2023-12-08 20:42:59,622 INFO
> > > > [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841012:
> > > delivered
> > > > message with message ID: 35, to consumer on address:
> > > > activemq.notifications, queue:
> > > >
> > >
> >
> notif.5fc32ddc-960a-11ee-8014-12a7ed0622c7.ActiveMQServerImpl_name=amq-broker,
> > > > consumer sessionID: 5fc2919b-960a-11ee-8014-12a7ed0622c7,
> consumerID: 0
> > > > 2023-12-08 20:42:59,622 INFO
> > > > [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841014:
> > > > acknowledged message:
> > > >
> > >
> >
> Reference[39]:NON-RELIABLE:CoreMessage[messageID=39,durable=false,userID=null,priority=0,
> > > > timestamp=0,expiration=0, durable=false,
> > > >
> > >
> >
> address=notif.5fc32ddc-960a-11ee-8014-12a7ed0622c7.ActiveMQServerImpl_name=amq-broker,size=266,properties=TypedProperties[
> *AMQ*RESET_QUEUE_DATA_COMPLETE=true]]@333704293, > > > with transaction:
> null
> > > > 2023-12-08 20:42:59,622 INFO
> > > > [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841012:
> > > delivered
> > > > message with message ID: 39, to consumer on address:
> > > > activemq.notifications, queue:
> > > >
> > >
> >
> notif.5fc32ddc-960a-11ee-8014-12a7ed0622c7.ActiveMQServerImpl_name=amq-broker,
> > > > consumer sessionID: 5fc2919b-960a-11ee-8014-12a7ed0622c7,
> consumerID: 0
> > > > 2023-12-08 20:43:08,447 INFO
> > > > [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841000:
> > created
> > > > connection: RemotingConnectionImpl [ID=1d1d79ce, clientID=null,
> > > > nodeID=59d00b93-960a-11ee-b571-7ab0027aea5f,
> > > >
> > >
> >
> transportConnection=org.apache.activemq.artemis.core.remoting.impl.netty.NettyServerConnection@51aa6256
> > > > [ID=1d1d79ce<mailto:transportConnection
> > > >
> > >
> >
> =org.apache.activemq.artemis.core.remoting.impl.netty.NettyServerConnection@51aa6256
> > > [ID=1d1d79ce>,
> > > > local= /10.44.0.18:61616, remote=/127.0.0.6:60787]]
> > > > 2023-12-08 20:43:08,464 INFO
> > > > [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841000:
> > created
> > > > connection: RemotingConnectionImpl [ID=8f88c6d5, clientID=null,
> > > > nodeID=59d00b93-960a-11ee-b571-7ab0027aea5f,
> > > >
> > >
> >
> transportConnection=org.apache.activemq.artemis.core.remoting.impl.netty.NettyServerConnection@14a6e5e7
> > > > [ID=8f88c6d5<mailto:transportConnection
> > > >
> > >
> >
> =org.apache.activemq.artemis.core.remoting.impl.netty.NettyServerConnection@14a6e5e7
> > > [ID=8f88c6d5>,
> > > > local= /10.44.0.18:61616, remote=/127.0.0.6:40065]]
> > > >
> > > >
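> As an aside for anyone reading along: the behavior in these logs is driven
> by the <ha-policy> block in broker.xml. A minimal sketch of the primary
> side of a replication policy that supports failback (element names per the
> Artemis 2.28 documentation; this is illustrative, not a verbatim copy of
> any attached file):
>
>   <ha-policy>
>     <replication>
>       <master>
>         <!-- illustrative sketch: on restart, the original primary checks
>              whether another broker is currently live with its node ID,
>              which is what allows failback to be initiated -->
>         <check-for-live-server>true</check-for-live-server>
>       </master>
>     </replication>
>   </ha-policy>
>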
> HA “slave” broker log.
>
> 2023-12-08 20:37:58,374 INFO [org.apache.activemq.artemis.core.server] AMQ221109: Apache ActiveMQ Artemis Backup Server version 2.28.0 [null] started, waiting live to fail before it gets active
> 2023-12-08 20:37:59,020 INFO [org.apache.activemq.artemis.core.server] AMQ221024: Backup server ActiveMQServerImpl::name=amq-broker is synchronized with live server, nodeID=9759f092-9609-11ee-9862-e26c98ea5364.
> 2023-12-08 20:38:00,032 INFO [org.apache.activemq.artemis.core.server] AMQ221031: backup announced
> 2023-12-08 20:42:27,415 INFO [org.apache.activemq.artemis.core.server] AMQ221066: Initiating quorum vote: LiveFailoverQuorumVote
> 2023-12-08 20:42:27,416 INFO [org.apache.activemq.artemis.core.server] AMQ221084: Requested 0 quorum votes
> 2023-12-08 20:42:27,416 INFO [org.apache.activemq.artemis.core.server] AMQ221083: ignoring quorum vote as max cluster size is 1.
> 2023-12-08 20:42:27,416 INFO [org.apache.activemq.artemis.core.server] AMQ221071: Failing over based on quorum vote results.
> 2023-12-08 20:42:27,431 WARN [org.apache.activemq.artemis.core.client] AMQ212037: Connection failure to cray-dvs-mqtt-ss-0.cray-dvs-mqtt-hdls-svc.dvs.svc.cluster.local/10.44.0.20:61616 has been detected: AMQ219015: The connection was disconnected because of server shutdown [code=DISCONNECTED]
> 2023-12-08 20:42:27,431 WARN [org.apache.activemq.artemis.core.client] AMQ212037: Connection failure to cray-dvs-mqtt-ss-0.cray-dvs-mqtt-hdls-svc.dvs.svc.cluster.local/10.44.0.20:61616 has been detected: AMQ219015: The connection was disconnected because of server shutdown [code=DISCONNECTED]
> 2023-12-08 20:42:27,434 INFO [org.apache.activemq.artemis.core.server] AMQ221037: ActiveMQServerImpl::name=amq-broker to become 'live'
> 2023-12-08 20:42:27,574 INFO [org.apache.activemq.artemis.core.server] AMQ221080: Deploying address DLQ supporting [ANYCAST]
> 2023-12-08 20:42:27,575 INFO [org.apache.activemq.artemis.core.server] AMQ221003: Deploying ANYCAST queue DLQ on address DLQ
> 2023-12-08 20:42:27,578 INFO [org.apache.activemq.artemis.core.server] AMQ221080: Deploying address ExpiryQueue supporting [ANYCAST]
> 2023-12-08 20:42:27,578 INFO [org.apache.activemq.artemis.core.server] AMQ221003: Deploying ANYCAST queue ExpiryQueue on address ExpiryQueue
> 2023-12-08 20:42:27,611 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live
> 2023-12-08 20:42:27,699 INFO [org.apache.activemq.artemis.core.server] AMQ221020: Started EPOLL Acceptor at cray-dvs-mqtt-ss-1.cray-dvs-mqtt-hdls-svc.dvs.svc.cluster.local:1883 for protocols [MQTT]
> 2023-12-08 20:42:27,717 INFO [org.apache.activemq.artemis.core.server] AMQ221020: Started EPOLL Acceptor at cray-dvs-mqtt-ss-1.cray-dvs-mqtt-hdls-svc.dvs.svc.cluster.local:61616 for protocols [CORE]
> 2023-12-08 20:42:51,946 INFO [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841000: created connection: RemotingConnectionImpl [ID=df357d0e, clientID=null, nodeID=9759f092-9609-11ee-9862-e26c98ea5364, transportConnection=org.apache.activemq.artemis.core.remoting.impl.netty.NettyServerConnection@4cbed077 [ID=df357d0e], local= /10.40.0.106:61616, remote=/127.0.0.6:56167]]
> 2023-12-08 20:42:56,977 INFO [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841001: destroyed connection: RemotingConnectionImpl [ID=df357d0e, clientID=null, nodeID=9759f092-9609-11ee-9862-e26c98ea5364, transportConnection=org.apache.activemq.artemis.core.remoting.impl.netty.NettyServerConnection@4cbed077 [ID=df357d0e], local= /10.40.0.106:61616, remote=/127.0.0.6:56167]]
> 2023-12-08 20:42:57,443 WARN [org.apache.activemq.artemis.core.client] AMQ212004: Failed to connect to server.
> 2023-12-08 20:42:57,445 INFO [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841010: routed message with ID: 26, result: NO_BINDINGS
> 2023-12-08 20:42:57,447 INFO [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841007: created queue: QueueImpl[name=$.artemis.internal.sf.my-cluster.59d00b93-960a-11ee-b571-7ab0027aea5f, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::name=amq-broker], temp=false]@737e64a6
> 2023-12-08 20:42:57,512 WARN [org.apache.activemq.artemis.core.server] AMQ222186: unable to authorise cluster control: AMQ219016: Connection failure detected. Unblocking a blocking call that will never get a response
> 2023-12-08 20:42:57,513 WARN [org.apache.activemq.artemis.core.server] AMQ224091: Bridge ClusterConnectionBridge@2a063f52 [name=$.artemis.internal.sf.my-cluster.59d00b93-960a-11ee-b571-7ab0027aea5f, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.59d00b93-960a-11ee-b571-7ab0027aea5f, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::name=amq-broker], temp=false]@737e64a6 targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@2a063f52 [name=$.artemis.internal.sf.my-cluster.59d00b93-960a-11ee-b571-7ab0027aea5f, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.59d00b93-960a-11ee-b571-7ab0027aea5f, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::name=amq-broker], temp=false]@737e64a6 targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory)?port=61616&host=cray-dvs-mqtt-ss-0-cray-dvs-mqtt-hdls-svc-dvs-svc-cluster-local], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@139110008 [nodeUUID=9759f092-9609-11ee-9862-e26c98ea5364, connector=TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory)?port=61616&host=cray-dvs-mqtt-ss-1-cray-dvs-mqtt-hdls-svc-dvs-svc-cluster-local, address=, server=ActiveMQServerImpl::name=amq-broker])) [initialConnectors=[TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory)?port=61616&host=cray-dvs-mqtt-ss-0-cray-dvs-mqtt-hdls-svc-dvs-svc-cluster-local], discoveryGroupConfiguration=null]] is unable to connect to destination. Retrying
> 2023-12-08 20:42:57,629 INFO [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841000: created connection: RemotingConnectionImpl [ID=483170f6, clientID=null, nodeID=9759f092-9609-11ee-9862-e26c98ea5364, transportConnection=org.apache.activemq.artemis.core.remoting.impl.netty.NettyServerConnection@3df0e251 [ID=483170f6], local= /10.40.0.106:61616, remote=/127.0.0.6:55911]]
> 2023-12-08 20:42:57,630 INFO [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841000: created connection: RemotingConnectionImpl [ID=4514a2b3, clientID=null, nodeID=9759f092-9609-11ee-9862-e26c98ea5364, transportConnection=org.apache.activemq.artemis.core.remoting.impl.netty.NettyServerConnection@f9b983b [ID=4514a2b3], local= /10.40.0.106:61616, remote=/127.0.0.6:60199]]
> 2023-12-08 20:42:57,648 INFO [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841000: created connection: RemotingConnectionImpl [ID=fcde4d78, clientID=null, nodeID=9759f092-9609-11ee-9862-e26c98ea5364, transportConnection=org.apache.activemq.artemis.core.remoting.impl.netty.NettyServerConnection@16ef79c8 [ID=fcde4d78], local= /10.40.0.106:61616, remote=/127.0.0.6:53221]]
> 2023-12-08 20:42:57,663 INFO [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841010: routed message with ID: 29, result: NO_BINDINGS
> 2023-12-08 20:42:57,663 INFO [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841002: created session name: 5ea190dc-960a-11ee-b571-7ab0027aea5f, session connectionID: fcde4d78
> 2023-12-08 20:42:57,679 INFO [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841010: routed message with ID: 31, result: NO_BINDINGS
> 2023-12-08 20:42:57,679 INFO [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841002: created session name: 5ea4ec3d-960a-11ee-b571-7ab0027aea5f, session connectionID: fcde4d78
> 2023-12-08 20:42:57,708 INFO [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841010: routed message with ID: 33, result: OK
> 2023-12-08 20:42:57,709 INFO [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841007: created queue: QueueImpl[name=notif.5ea5af8e-960a-11ee-b571-7ab0027aea5f.ActiveMQServerImpl_name=amq-broker, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::name=amq-broker], temp=true]@3645037f
> 2023-12-08 20:42:57,719 INFO [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841005: created consumer with ID: 0, with session name: 5ea4ec3d-960a-11ee-b571-7ab0027aea5f
> 2023-12-08 20:42:57,720 INFO [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841010: routed message with ID: 35, result: OK
> 2023-12-08 20:42:57,755 INFO [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841009: sent message with ID: 36, result: OK, transaction: UNAVAILABLE
> 2023-12-08 20:42:57,753 INFO [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841014: acknowledged message: Reference[38]:NON-RELIABLE:CoreMessage[messageID=38,durable=false,userID=null,priority=0, timestamp=0,expiration=0, durable=false, address=notif.5ea5af8e-960a-11ee-b571-7ab0027aea5f.ActiveMQServerImpl_name=amq-broker,size=248,properties=TypedProperties[_AMQ_RESET_QUEUE_DATA=true]]@2125985563, with transaction: null
> 2023-12-08 20:42:57,760 INFO [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841012: delivered message with message ID: 38, to consumer on address: activemq.notifications, queue: notif.5ea5af8e-960a-11ee-b571-7ab0027aea5f.ActiveMQServerImpl_name=amq-broker, consumer sessionID: 5ea4ec3d-960a-11ee-b571-7ab0027aea5f, consumerID: 0
> 2023-12-08 20:42:57,761 INFO [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841014: acknowledged message: Reference[41]:NON-RELIABLE:CoreMessage[messageID=41,durable=false,userID=null,priority=0, timestamp=Fri Dec 08 20:42:57 UTC 2023,expiration=0, durable=false, address=notif.5ea5af8e-960a-11ee-b571-7ab0027aea5f.ActiveMQServerImpl_name=amq-broker,size=636,properties=TypedProperties[_AMQ_RoutingName=DLQ,_AMQ_Distance=0,_AMQ_Address=DLQ,_AMQ_NotifType=BINDING_ADDED,_AMQ_Binding_ID=3,_AMQ_FilterString=NULL-value,_AMQ_NotifTimestamp=1702068177753,_AMQ_ClusterName=DLQ9759f092-9609-11ee-9862-e26c98ea5364]]@243253411, with transaction: null
> 2023-12-08 20:42:57,761 INFO [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841012: delivered message with message ID: 41, to consumer on address: activemq.notifications, queue: notif.5ea5af8e-960a-11ee-b571-7ab0027aea5f.ActiveMQServerImpl_name=amq-broker, consumer sessionID: 5ea4ec3d-960a-11ee-b571-7ab0027aea5f, consumerID: 0
> 2023-12-08 20:42:57,762 INFO [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841014: acknowledged message: Reference[43]:NON-RELIABLE:CoreMessage[messageID=43,durable=false,userID=null,priority=0, timestamp=Fri Dec 08 20:42:57 UTC 2023,expiration=0, durable=false, address=notif.5ea5af8e-960a-11ee-b571-7ab0027aea5f.ActiveMQServerImpl_name=amq-broker,size=684,properties=TypedProperties[_AMQ_RoutingName=ExpiryQueue,_AMQ_Distance=0,_AMQ_Address=ExpiryQueue,_AMQ_NotifType=BINDING_ADDED,_AMQ_Binding_ID=7,_AMQ_FilterString=NULL-value,_AMQ_NotifTimestamp=1702068177753,_AMQ_ClusterName=ExpiryQueue9759f092-9609-11ee-9862-e26c98ea5364]]@1544502860, with transaction: null
> 2023-12-08 20:42:57,762 INFO [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841012: delivered message with message ID: 43, to consumer on address: activemq.notifications, queue: notif.5ea5af8e-960a-11ee-b571-7ab0027aea5f.ActiveMQServerImpl_name=amq-broker, consumer sessionID: 5ea4ec3d-960a-11ee-b571-7ab0027aea5f, consumerID: 0
> 2023-12-08 20:42:57,762 INFO [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841014: acknowledged message: Reference[44]:NON-RELIABLE:CoreMessage[messageID=44,durable=false,userID=null,priority=0, timestamp=0,expiration=0, durable=false, address=notif.5ea5af8e-960a-11ee-b571-7ab0027aea5f.ActiveMQServerImpl_name=amq-broker,size=266,properties=TypedProperties[_AMQ_RESET_QUEUE_DATA_COMPLETE=true]]@1828250555, with transaction: null
> 2023-12-08 20:42:57,762 INFO [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841012: delivered message with message ID: 44, to consumer on address: activemq.notifications, queue: notif.5ea5af8e-960a-11ee-b571-7ab0027aea5f.ActiveMQServerImpl_name=amq-broker, consumer sessionID: 5ea4ec3d-960a-11ee-b571-7ab0027aea5f, consumerID: 0
> 2023-12-08 20:42:59,607 INFO [org.apache.activemq.artemis.core.server] AMQ221027: Bridge ClusterConnectionBridge@2a063f52 [name=$.artemis.internal.sf.my-cluster.59d00b93-960a-11ee-b571-7ab0027aea5f, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.59d00b93-960a-11ee-b571-7ab0027aea5f, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::name=amq-broker], temp=false]@737e64a6 targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@2a063f52 [name=$.artemis.internal.sf.my-cluster.59d00b93-960a-11ee-b571-7ab0027aea5f, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.59d00b93-960a-11ee-b571-7ab0027aea5f, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::name=amq-broker], temp=false]@737e64a6 targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory)?port=61616&host=cray-dvs-mqtt-ss-0-cray-dvs-mqtt-hdls-svc-dvs-svc-cluster-local], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@139110008 [nodeUUID=9759f092-9609-11ee-9862-e26c98ea5364, connector=TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory)?port=61616&host=cray-dvs-mqtt-ss-1-cray-dvs-mqtt-hdls-svc-dvs-svc-cluster-local, address=, server=ActiveMQServerImpl::name=amq-broker])) [initialConnectors=[TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory)?port=61616&host=cray-dvs-mqtt-ss-0-cray-dvs-mqtt-hdls-svc-dvs-svc-cluster-local], discoveryGroupConfiguration=null]] is connected
> 2023-12-08 20:42:59,625 INFO [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841010: routed message with ID: 46, result: OK
> 2023-12-08 20:42:59,625 INFO [org.apache.activemq.artemis.core.server.plugin.impl] AMQ841010: routed message with ID: 48, result: OK
> 2023-12-08 20:43:07,437 WARN [org.apache.activemq.artemis.core.server] AMQ222186: unable to authorise cluster control: AMQ219016: Connection failure detected. Unblocking a blocking call that will never get a response
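>
> One note on the quorum lines above: AMQ221084 ("Requested 0 quorum votes")
> and AMQ221083 ("ignoring quorum vote as max cluster size is 1") show that
> with only two brokers there is no third node to poll, so the vote is a
> no-op and the backup simply fails over. The standby-side counterpart of
> the earlier <ha-policy> sketch, again per the Artemis 2.28 documentation
> rather than any exact file, is <allow-failback>:
>
>   <ha-policy>
>     <replication>
>       <slave>
>         <!-- illustrative sketch: lets a backup that has become live step
>              down again when the original primary comes back -->
>         <allow-failback>true</allow-failback>
>       </slave>
>     </replication>
>   </ha-policy>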