As JB says, you need to ensure that the messages are sent as persistent
messages and that the broker is configured with a persistence store whose
data will survive the restart of the container. I'll go into some detail
about the various possible options, and if what I write doesn't go deep
enough to answer your questions, let us know your specific configuration
and we can go deeper into your exact situation.
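On the first point, here's a minimal sketch of a producer that sends
persistently (this assumes the activemq-client jar on the classpath; the
broker URL and queue name are placeholders, and it obviously needs a
running broker to actually execute):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class PersistentSend {
    public static void main(String[] args) throws Exception {
        // Placeholder broker URL and queue name; adjust for your environment
        ConnectionFactory cf =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection conn = cf.createConnection();
        try {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer =
                    session.createProducer(session.createQueue("jobs"));
            // PERSISTENT is the JMS default, but setting it explicitly
            // guards against a factory or URL option silently downgrading it
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);
            producer.send(session.createTextMessage("job payload"));
        } finally {
            conn.close();
        }
    }
}
```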

If you're using the JDBC store type to store persistent messages in a
database running outside of the container, then this is trivially easy,
since that database will remain running while the broker's container is
restarting, and you could even run the broker in active/passive mode (i.e.
you have a second container waiting to acquire the database lock and become
the active broker) for faster failover.
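As a rough sketch, the JDBC store in activemq.xml looks something like the
following (the datasource bean, driver, URL, and credentials are all
placeholders you'd swap for your own database; a second broker pointed at
the same datasource gives you the active/passive pair I mentioned):

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker1">
  <persistenceAdapter>
    <!-- The database lock is what makes active/passive failover work:
         the passive broker blocks here until it can acquire the lock -->
    <jdbcPersistenceAdapter dataSource="#mysql-ds"/>
  </persistenceAdapter>
</broker>

<!-- Placeholder datasource; use your own driver/URL/credentials -->
<bean id="mysql-ds" class="org.apache.commons.dbcp2.BasicDataSource">
  <property name="driverClassName" value="com.mysql.cj.jdbc.Driver"/>
  <property name="url" value="jdbc:mysql://db-host:3306/activemq"/>
  <property name="username" value="activemq"/>
  <property name="password" value="activemq"/>
</bean>
```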

Similarly, if you're using the KahaDB store type and storing the data files
on an NFSv4 share, those files will survive the container restart and will
be available for the next container to use.
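The KahaDB configuration for that is just a directory setting; the mount
point below is a placeholder for wherever you mount the share (note that
you want NFSv4 specifically, since its locking lets a shared-storage
active/passive pair fail over safely):

```xml
<persistenceAdapter>
  <!-- /mnt/nfs/activemq is a placeholder for your NFSv4 mount point -->
  <kahaDB directory="/mnt/nfs/activemq/kahadb"/>
</persistenceAdapter>
```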

If you're using KahaDB and writing to a local volume on the host where
ActiveMQ is running, and if the next ActiveMQ container will run on that
same host, then you're similarly safe in the case of container restarts.
But you're not protected from host failures; if that one host fails, the
persistence store data is lost. So if this is your configuration, you need
to think carefully about how you will handle that failure case.
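For the local-volume case with plain Docker, a named volume (or a bind
mount) is what keeps the data across container restarts; the volume name,
image, and data path below are placeholders for your setup:

```shell
# A named volume survives container removal, but lives on this one host
docker volume create activemq-data
docker run -d --name activemq \
  -v activemq-data:/opt/apache-activemq/data \
  -p 61616:61616 \
  apache/activemq-classic:5.18.3
```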

If you're using KahaDB in a cloud environment and writing the data file to
a long-lived network-attached block store volume (e.g. EBS in AWS) that you
attach to a replacement host if the host fails, then that handles host
failures as well as container failures, though you still need a strategy
for handling what happens when the block store volume fails.
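In AWS terms, recovering from a host failure is roughly the following
(volume and instance IDs and the mount point are placeholders; snapshots
are one strategy for the volume-failure case):

```shell
# Re-attach the surviving EBS volume to a replacement instance,
# then mount it where the broker container expects its data directory
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 --device /dev/xvdf
sudo mount /dev/xvdf /var/lib/activemq

# Periodic snapshots give you something to restore if the volume itself dies
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0
```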

If you're using KahaDB and running your container in a container runtime
environment such as Kubernetes, where the next container can be scheduled
on a different host, then simply writing to a local disk won't cut it,
because the new container might run on a host that doesn't have that local
disk. In that case, you'd need to configure storage that allows the
container and the storage volume to be available on the same host, either
by constraining the container to run on the same host or by allowing the
storage volume to float around the cluster. The specifics of what options
are available will depend on which container runtime environment technology
you're using.
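On Kubernetes specifically, the usual pattern is a StatefulSet with a
PersistentVolumeClaim, so the replacement pod re-attaches the same volume
wherever it's scheduled. A minimal sketch (names, image, mount path, and
size are all placeholders):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: activemq
spec:
  serviceName: activemq
  replicas: 1
  selector:
    matchLabels: { app: activemq }
  template:
    metadata:
      labels: { app: activemq }
    spec:
      containers:
        - name: activemq
          image: apache/activemq-classic:5.18.3
          volumeMounts:
            - name: data
              mountPath: /opt/apache-activemq/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```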

No matter what, you'll want to ensure that you're not writing KahaDB data
to a transient disk volume that will be discarded when the container exits.

Tim

On Thu, Jul 1, 2021, 1:32 AM Jean-Baptiste Onofre <j...@nanthrax.net> wrote:

> Hi,
>
> If you use persistent messages, the messages stay in the store waiting to
> be consumed (or are moved to the DLQ if they expire or the redelivery
> limit is exceeded).
>
> So, if you restart the ActiveMQ broker, the persistent messages will
> survive the restart and be available to be consumed.
>
> Regards
> JB
>
> > Le 1 juil. 2021 à 09:07, Jai Praful Ved <vedjaipra...@gmail.com> a
> écrit :
> >
> > Hello.
> > We are proposing to use Active MQ for job management for consumer
> service.
> > We have a question related to configuration on Docker for ActiveMQ. The
> > question is: if there are jobs published to the queue and ActiveMQ goes
> > down, then how will the published messages (those that were ready to be
> > consumed) be restored when the ActiveMQ container is restored again?
> >
> > I hope I was able to explain the problem statement. I would be more than
> > happy to share any clarification for this. This is my first post to the
> > list, and perhaps this information is already available, but I could not
> > find it in the community (perhaps my search keywords were not good
> > enough).
> >
> > Thanks!
> > Best regards,
> > Jai Praful Ved
>
>
