I’d love to see the bill for storing metrics and logs for 1 million topics,
let alone hundreds of millions.

That sounds more like a splashy blog post than a practical solution.

> On Mar 8, 2025, at 1:06 PM, Nathan Clayton <apache....@nathanclayton.com> 
> wrote:
> 
> I'm not sure about Kafka, but Apache Pulsar is currently built to support up
> to 1 million topics. I know there's active work on using Oxia instead of
> ZooKeeper for the metadata store, with the goal of supporting hundreds of
> millions of topics.
> 
> That said, millions of topics in Artemis may be a bit of an anti-pattern. 
> Depending on your use case, it may be better to use filter expressions on 
> fewer queues [1].
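> 
> For example, here is a minimal JMS sketch of a consumer using a message
> selector on one shared queue rather than a dedicated queue per device (the
> queue name, property name, and URL are illustrative; newer Artemis clients
> use jakarta.jms rather than javax.jms):
> 
>     import javax.jms.*;
>     import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;
> 
>     public class SelectorExample {
>         public static void main(String[] args) {
>             ConnectionFactory cf =
>                 new ActiveMQConnectionFactory("tcp://localhost:61616");
>             try (JMSContext context = cf.createContext()) {
>                 // One shared queue; the selector narrows delivery so each
>                 // consumer only sees messages tagged for its device.
>                 // (Producers would set the deviceId property on each message.)
>                 Queue queue = context.createQueue("device.events");
>                 JMSConsumer consumer =
>                     context.createConsumer(queue, "deviceId = 'device-42'");
>                 Message message = consumer.receive(5000);
>             }
>         }
>     }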
> 
> Nathan
> 
> [1] 
> https://activemq.apache.org/components/artemis/documentation/latest/filter-expressions.html
> 
> 
> On Wed, Mar 5, 2025, at 12:45, William Crowell wrote:
>> I am wondering if Apache Kafka might be more feasible for something like 
>> this.
>> 
>> Regards,
>> 
>> William Crowell
>> 
>> From: Justin Bertram <jbert...@apache.org>
>> Date: Wednesday, March 5, 2025 at 3:26 PM
>> To: users@activemq.apache.org <users@activemq.apache.org>
>> Subject: Re: Maximum Amount of Topic/Queues Within Apache Artemis
>> That's it.
>> 
>> 
>> Justin
>> 
>> On Wed, Mar 5, 2025 at 2:19 PM William Crowell
>> <wcrow...@perforce.com.invalid> wrote:
>> 
>>> Justin,
>>> 
>>> Never mind, I think I found it:
>>> 
>>> “…the web console's behavior is configurable. Go to the "Preferences"
>>> (available from the menu in the top right) and click the "Jolokia" tab.
>>> Here you can turn off auto-refresh (i.e. "Update rate"). You can also
>>> decrease the amount of data fetched by lowering the "Max depth" and "Max
>>> collection size".”
>>> 
>>> Regards,
>>> 
>>> William Crowell
>>> 
>>> From: William Crowell <wcrow...@perforce.com.INVALID>
>>> Date: Wednesday, March 5, 2025 at 3:12 PM
>>> To: users@activemq.apache.org <users@activemq.apache.org>
>>> Subject: Re: Maximum Amount of Topic/Queues Within Apache Artemis
>>> Justin,
>>> 
>>> Thank you for your reply; your insight is invaluable.
>>> 
>>> If you can provide more details on this, I would appreciate it; I can also
>>> search the archives. I am guessing there is some command-line equivalent
>>> for browsing the tree.
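>>> 
>>> (If there is, I am guessing it might be something like the broker CLI's
>>> "artemis queue stat" command, which prints a table of queues and their
>>> metrics, e.g. "./artemis queue stat --url tcp://localhost:61616" plus
>>> credentials, but I could be wrong about that.)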
>>> 
>>> We have several devices with JMS clients that communicate with the broker
>>> over topics and durable queues; by “several devices” I mean hundreds or
>>> perhaps thousands.
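>>> 
>>> For context, each device runs something like this minimal sketch (names and
>>> connection details are illustrative, not our actual code):
>>> 
>>>     import javax.jms.*;
>>>     import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;
>>> 
>>>     public class DeviceClient {
>>>         public static void main(String[] args) {
>>>             ConnectionFactory cf =
>>>                 new ActiveMQConnectionFactory("tcp://broker:61616");
>>>             try (JMSContext context = cf.createContext()) {
>>>                 // Each device holds its own durable subscription; the
>>>                 // broker backs every durable subscription with a queue,
>>>                 // so device count drives the broker's total queue count.
>>>                 context.setClientID("device-42");
>>>                 Topic topic = context.createTopic("device.commands");
>>>                 JMSConsumer consumer =
>>>                     context.createDurableConsumer(topic, "device-42-sub");
>>>                 Message command = consumer.receive(5000);
>>>             }
>>>         }
>>>     }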
>>> 
>>> Regards,
>>> 
>>> William Crowell
>>> 
>>> From: Justin Bertram <jbert...@apache.org>
>>> Date: Wednesday, March 5, 2025 at 2:57 PM
>>> To: users@activemq.apache.org <users@activemq.apache.org>
>>> Subject: Re: Maximum Amount of Topic/Queues Within Apache Artemis
>>> The broker itself doesn't impose any arbitrary limits on the number of
>>> addresses & queues. That said, there certainly are limits, the size of your
>>> heap probably being the most important. Every address and queue carries
>>> some memory overhead, including not just the objects themselves but also
>>> related objects like the JMX MBeans that enable management.
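>>> 
>>> As a purely illustrative back-of-the-envelope figure (an assumption, not a
>>> measured number): if each address/queue pair plus its MBeans costs on the
>>> order of a few kilobytes of heap, a million of them would consume several
>>> gigabytes before a single message is stored.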
>>> 
>>> The current management console will struggle by default with a huge number
>>> of addresses and/or queues due to the way the "tree" view works (i.e. it
>>> refreshes the whole view on a regular basis). This is the main reason we
>>> removed this view in the new console [1]. That said, the console can be
>>> configured to mitigate some of these problems. I can provide more details
>>> on that if necessary, although it's been covered on this list a few times
>>> already.
>>> 
>>> Regarding scalability: the broker was written with scalability in mind,
>>> but everything breaks down at some point. I'm interested to hear about your
>>> experience if you go down this route, especially any bottlenecks you find
>>> for your specific use case. Keep us in the loop!
>>> 
>>> 
>>> Justin
>>> 
>>> [1] https://issues.apache.org/jira/browse/ARTEMIS-5319
>>> 
>>> On Wed, Mar 5, 2025 at 12:33 PM William Crowell
>>> <wcrow...@perforce.com.invalid> wrote:
>>> 
>>>> Good afternoon,
>>>> 
>>>> Are there any limitations or concerns with creating millions of topics and
>>>> queues within Apache Artemis when each topic and queue carries a low volume
>>>> of messages?  I do not think there are, as I believe Artemis should be able
>>>> to handle this use case.  The important question is: is it scalable?
>>>> 
>>>> Regards and have a great day,
>>>> 
>>>> William Crowell
>>>> 
>>>> 
>> 
>> 
> 

