On Mon, Apr 22, 2024 at 6:46 PM Justin Bertram wrote:
> This was caused by ARTEMIS-4582 [1]. I've opened ARTEMIS-4742 [2] and sent
> a PR [3] to address the problem.
>
> To be clear, this has nothing to do with replication or load/throughput.
> It's just a buffer formatting issue caused by a conf
`auto-delete`
> setting but it allows changing the `configuration-managed` setting, i.e.
>
> {"name":"MYQUEUE","configuration-managed":true}
>
> Regards,
> Domenico
>
> On Mon, 6 Nov 2023 at 17:55, Stephen Baker
> wrote:
>
> > Hello
Hello,
A problem I've hit a couple of times is that we've forgotten to deploy
our updated addresses before deploying the applications that use them.
This ends up succeeding because auto-create is enabled, but the
addresses and queues are configured with auto-delete: true and
configuration-managed: false.
We are performing some extended maintenance in our cold datacenter and
our Artemis mirrors are backing up (currently around 2 GB of messages).
What is the recommended way to "safely" disable the capture of new
events for mirroring and to flush the mirror queues?
Thanks,
Stephen E. Baker
--
We have a queue in our production system (on Artemis 2.27.1) that ended up
being used before it was defined in our broker.xml.
This meant that the address has the “Auto created: true” attribute, and the
queue under it ended up with “Auto delete: true” and “Configuration managed:
false”.
We run
new properties
> > files for custom modifications? but if you keep these variables in a
> > caller's that's pretty much all the same.
> >
> >
> > On Fri, Nov 18, 2022 at 10:28 AM Stephen Baker
> > wrote:
> > >
> > > My organization had
My organization had been using artemis.profile to define additional instance
parameters, e.g.:
RAVE_MIRROR_CONNECTION_STR=tcp://artms1.atl.raveu.net:5672
RAVE_MIRROR_NAME=ATL
RAVE_MIRROR_USER=rave
RAVE_MIRROR_PASSWORD=password
RAVE_CONFIG_DIR=/rave/artemis/deploy/example
# Rave environment settings
once it exits.
>
> On Thu, 13 Oct 2022 at 18:52, Stephen Baker
> wrote:
> >
> > Because bin/artemis includes references to the jboss logmanager causing
> > artemis to fail on startup
> >
> > Diffing my two instances I see:
> > # Set Defaults Properties
client side.
I'm also not sure if you need a rate limiter or something like that?
On Fri, Oct 21, 2022 at 10:35 AM Stephen Baker <
stephen.ba...@rmssoftwareinc.com> wrote:
> Are there any thoughts on how to achieve fair queuing with Artemis MQ.
>
> We have a limited resource (th
Are there any thoughts on how to achieve fair queuing with Artemis MQ?
We have a limited resource (the consumer) and many of our customers are
producing messages for it. We do not want a customer that produces a lot of
messages to block a customer that only sends a few (basically the
definition of fair queuing).
Stephen Baker
wrote:
>
> Because bin/artemis includes references to the jboss logmanager causing
> artemis to fail on startup
>
> Diffing my two instances I see:
> # Set Defaults Properties
> ARTEMIS_LOGGING_CONF="$ARTEMIS_INSTANCE_ETC_URI/logging.properties"
To: users@activemq.apache.org
Subject: Re: Mirror compatibility across versions
I think the record would pile up unacked at the source mirror.
and @Stephen Baker: sorry about my mistake on this fix...
Why would the upgrade be difficult on 2.27? It's just adding a
log4j2.properties... everything else should work.
Clebert Suconic
wrote:
>
> I don't know how I would test it yet. It's fairly late in the night
> for me.. I will think about it tomorrow.
>
>
> but here is a tentative fix:
>
> https://github.com/apache/activemq-artemis/pull/4256
>
> On Wed, Oct 12, 2022 at 9:30
stack if this is not it?
It definitely needs fixing... I'm investigating it.
On Wed, Oct 12, 2022 at 6:05 PM Stephen Baker
wrote:
>
> Having updated both sides to 2.25 I’m seeing this error in the logs; is it a
> concern that warrants further investigation?
>
> artemis-test-ar
(ThreadPoolExecutor.java:628)
[java.base:]
artemis-test-artemis-1-m-1 |at
org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
[artemis-commons-2.25.0.jar:]
From: Stephen Baker
Date: Wednesday, October 12, 2022 at 4:43 PM
To: users@activemq.apache.org
Date: Tuesday, October 11, 2022 at 3:24 PM
To: users@activemq.apache.org
Subject: Re: Mirror compatibility across versions
Yeah... something like that; not necessarily in there though, but a
similar test.
On Tue, Oct 11, 2022 at 1:44 PM Stephen Baker
wrote:
>
> Ok, I agree based on a c
, 2022 at 10:06 AM Stephen Baker
wrote:
>
> We are planning our production upgrade from 2.20 to 2.25. These upgrades
> involve a loss of service in the window between stopping the live and when
> the backup instance becomes ready to process messages.
>
> I was wondering if the
We are planning our production upgrade from 2.20 to 2.25. These upgrades
involve a loss of service in the window between stopping the live and when the
backup instance becomes ready to process messages.
I was wondering if the mirror protocol is expected to be compatible between
those versions.
I traced this code quite recently because we needed to switch to using it for
one of our clients and also would have preferred specifying it on the consumer.
ClientSession in the core API exposes the windowSize as a parameter for
creating consumers, but the JMS shim ActiveMQSession does not.
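For reference, a minimal core-API sketch of that overload, assuming a local broker; the queue name is illustrative, and a windowSize of 0 turns off client-side buffering entirely:

```java
import org.apache.activemq.artemis.api.core.client.ActiveMQClient;
import org.apache.activemq.artemis.api.core.client.ClientConsumer;
import org.apache.activemq.artemis.api.core.client.ClientSession;
import org.apache.activemq.artemis.api.core.client.ClientSessionFactory;
import org.apache.activemq.artemis.api.core.client.ServerLocator;

public class WindowSizeSketch {
    public static void main(String[] args) throws Exception {
        try (ServerLocator locator =
                     ActiveMQClient.createServerLocator("tcp://localhost:61616");
             ClientSessionFactory factory = locator.createSessionFactory();
             ClientSession session = factory.createSession()) {
            // createConsumer(queue, filter, windowSize, maxRate, browseOnly):
            // the window is set per consumer here, unlike consumerWindowSize on
            // the connection factory URL, which applies to every consumer.
            ClientConsumer consumer = session.createConsumer("MYQUEUE", null, 0, -1, false);
            session.start();
            System.out.println(consumer.receive(1000));
        }
    }
}
```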
Th
Don't mirror and send to the same broker!
A mirror is a mirror... if you send to that mirror, the send will then
update the mirror back. That's a bit of a mess! Make it simple!
On Mon, Sep 19, 2022 at 8:46 AM Stephen Baker
wrote:
>
> Or would you suggest swapping the core bridg
Or would you suggest swapping the core bridges for additional broker connections?
From: Stephen Baker
Date: Monday, September 19, 2022 at 8:38 AM
To: users@activemq.apache.org
Subject: Re: Dual Mirroring and Core Bridges
To isolate different sets of services they
n the servers) I would
simplify your setup with either one or the other.
On Fri, Sep 16, 2022 at 8:19 AM Stephen Baker
wrote:
>
> We are running multiple Artemis clusters between hot and cold
> datacenters in a dual mirroring setup, so:
>
> A mirrors with A’
> B mirrors w
ine, by setting it to log)
> >
> >
> > The critical analyzer has no effect on IO issues; critical IO
> > issues will still stop the broker if they happen. The best action is to
> > disable it.
> >
> >
> >
> > On Fri, Sep 16, 2022 at 5:2
Subject: Re: Critical error sending large messages to mysql
I fixed some critical analyzer issues. You should upgrade.
Or just turn off the critical analyzer.
On Thu, Sep 15, 2022 at 5:01 PM Stephen Baker <
stephen.ba...@rmssoftwareinc.com> wrote:
> The message size actually doesn’t matter at all. The p
We are running multiple Artemis clusters between hot and cold
datacenters in a dual mirroring setup, so:
A mirrors with A’
B mirrors with B’
We also have core bridges between A and B so:
A.forwardBfoo goes to B.foo
A’.forwardBfoo goes to B’.foo (by virtue of symmetric configuration)
B.f
-features-max-allowed-packet.html
?
--
Vilius
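(A quick way to check the server-side limit that page describes, from plain JDBC; the URL and credentials here are illustrative:)

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MaxAllowedPacketCheck {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/artemis"; // illustrative
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement st = conn.createStatement();
             // max_allowed_packet caps the size of any single packet the server
             // will accept, which large messages can easily exceed.
             ResultSet rs = st.executeQuery("SHOW VARIABLES LIKE 'max_allowed_packet'")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + " = " + rs.getString(2));
            }
        }
    }
}
```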
-----Original Message-----
From: Stephen Baker
Sent: Thursday, September 15, 2022 8:31 PM
To: users@activemq.apache.org
Subject: Re: Critical error sending large messages to mysql
To follow up, I have tracked this down to a bug in connector/j. I
31 PM Stephen Baker <
stephen.ba...@rmssoftwareinc.com> wrote:
> To follow up, I have tracked this down to a bug in connector/j. I am
> working on a simple proof of concept and a ticket for them now.
>
> For any artemis mysql users that are curious, the problem only
broken in that case.
From: Stephen Baker
Date: Thursday, September 15, 2022 at 10:57 AM
To: users@activemq.apache.org
Subject: Critical error sending large messages to mysql
I don’t have exact reproduction steps yet (otherwise I would have filed an
issue), but on Artemis 2.22 using a mysql backed
I don’t have exact reproduction steps yet (otherwise I would have filed an
issue), but on Artemis 2.22 using a MySQL-backed journal our QA can reliably
send messages that crash the server:
```
2022-09-15 10:42:24,843 WARN [org.apache.activemq.artemis.journal] AMQ142021:
Error on IO callback, c
temis-cli/src/main/java/org/apache/activemq/artemis/cli/factory/serialize/XMLMessageSerializer.java
On Fri, Jul 29, 2022 at 5:31 PM Stephen Baker <
stephen.ba...@rmssoftwareinc.com> wrote:
> It seems the documentation for the artemis cli tool is particularly out of
It seems the documentation for the artemis cli tool is particularly out of date.
The Data Tools page references the following subcommands of artemis data:
print
exp
imp
encode
decode
compact
recover
But only print and recover appear to exist.
The Activation Tools documentation lists:
artemis act
Can you clarify?
Justin
On Fri, Jul 29, 2022 at 8:19 AM Stephen Baker <
stephen.ba...@rmssoftwareinc.com> wrote:
> Hello,
>
> I have looked over the use of executeBatch in artemis-jdbc-store and it
> would be safe to use mysql’s rewriteBatchedStatements connection string
> va
Hello,
I have looked over the use of executeBatch in artemis-jdbc-store and it would
be safe to use mysql’s rewriteBatchedStatements connection string value the way
the code is written right now. I was wondering if that was a conscious decision
given rewriteBatchedStatements is not JDBC conformant.
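For anyone unfamiliar with the flag, here is a minimal standalone JDBC sketch (not Artemis code; the URL, credentials, and table are illustrative) showing what rewriteBatchedStatements changes:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class RewriteBatchSketch {
    public static void main(String[] args) throws Exception {
        // With rewriteBatchedStatements=true, Connector/J collapses the batch
        // below into multi-row INSERT statements instead of sending each one.
        String url = "jdbc:mysql://localhost:3306/test?rewriteBatchedStatements=true";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO t (id, data) VALUES (?, ?)")) {
            for (int i = 0; i < 1000; i++) {
                ps.setInt(1, i);
                ps.setString(2, "row-" + i);
                ps.addBatch();
            }
            // The JDBC-conformance caveat: with rewriting on, the returned update
            // counts may not be exact per-statement counts (e.g. SUCCESS_NO_INFO).
            int[] counts = ps.executeBatch();
            System.out.println("batch returned " + counts.length + " counts");
        }
    }
}
```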
, and I also updated the
website.
Out of curiosity, how is your MySQL replication configured? Are you using
the default asynchronous, semisynchronous, or fully synchronous NDB cluster?
Justin
On Tue, May 31, 2022 at 4:09 PM Stephen Baker <
stephen.ba...@rmssoftwareinc.com> wrote:
> U
Understood, thank you. We (the company I work for) are definitely getting more
value out of the product than we are contributing, so that point is taken. The
JDBC replication route was recommended by a consultant from Savoir as more
established/reliable than mirroring when delivery guarantees are required.
Thanks,
I am a little concerned about what this implies for production readiness, given
there are no schema changes mentioned in the upgrade notes and it looks to me
like it’s backwards incompatible, so it couldn’t be done live before the upgrade?
If this were a production system, what would the recommendation be?
We have an Artemis MQ instance in our QA environment that we are using to
evaluate JDBC persistence.
It was running with Artemis 2.20, and we recently updated to 2.22. Since then
the instance has failed to start with the following error:
2022-05-31 10:16:55,512 WARN
[org.apache.activemq.artemi
I was purging ExpiryQueues and I noticed the following warning in our Artemis
logs.
We have a dual mirror setup. It's marked as a WARN; I'm just wondering what the
implications are, or if it should be filed as a bug (if so, I'm not sure how to
reproduce it).
2022-04-20 11:47:16,131 WARN
[org.apa
Hello,
We’re using Artemis 2.20. We had a misbehaving application that had been
opening consumers without closing them, which I recently addressed. The fix was
deployed today and since then I have been seeing a lot of the following error
(as the consumer count is very slowly trickling down):
202
> On 2022-03-23, 5:42 AM, "Dondorp, Erwin"
> wrote:
> There have been a number of fixes in this area in the recent past. Which
> version are you using?
I am using ArtemisMQ 2.20.0; these servers were running 2.17.0 until a few
weeks ago.
I have a number of related questions regarding the various message count
attributes available.
On many of my high-traffic queues, Delivering size, Durable delivering size, and
Durable persistent size appear to be large negative numbers like -188160082,
even while the count is zero. Is that something I should be concerned about?
In our production system we switched from one-directional mirroring to the dual
mirroring supported by Artemis 2.20+. At the same time we renamed the mirrors
to have distinct names in the two datacenters (BOS and SJE); previously the
mirrors were simply called "mirror" on both sides.
We did that b