the FAQ is your friend - peek at
https://activemq.apache.org/how-do-i-create-new-destinations.html
On 25 March 2014 22:29, mqadmin44 wrote:
> I am new to ActiveMQ
>
> I started up the ActiveMQ, and then I started the web console
>
> I created a queue with the web console. This worked great.
>
>
I am new to ActiveMQ
I started up the ActiveMQ, and then I started the web console
I created a queue with the web console. This worked great.
Where is the queue I created defined? Does it have an XML file describing
it? In what directory would it be, or would a queue created by the web
conso
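For reference, destinations created via the web console (or by clients) are created dynamically by the broker and are not written out to an XML file; they can, however, be declared statically in conf/activemq.xml. A minimal sketch, assuming a stock 5.x broker config (the destination names are examples, not from the thread):

```xml
<!-- conf/activemq.xml: statically declared destinations (sketch) -->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost">
  <destinations>
    <!-- these exist as soon as the broker starts -->
    <queue physicalName="EXAMPLE.QUEUE"/>
    <topic physicalName="EXAMPLE.TOPIC"/>
  </destinations>
</broker>
```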
There has been a bunch of work in that area for 5.10, so my first
suggestion is to give a 5.10-SNAPSHOT a whirl. If that shows the same
behaviour we can get on the job of sorting out what the problem is.
A fresh snapshot was minted today -
http://repository.apache.org/content/repositories/snapshots/
I read in this site:
http://activemq.apache.org/the-proxy-connector.html
That ActiveMQ has a proxy connector. Is there an ActiveMQ proxy agent, or
do I need a third-party proxy? If I need a third-party proxy, is there one
you would recommend
I also noticed there is a tunneling transport, is t
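For what it's worth, the proxy connector referenced above is built into the broker itself and configured in activemq.xml. A sketch, with the hostnames and ports as placeholder assumptions:

```xml
<!-- activemq.xml: built-in proxy connector (sketch; addresses are examples) -->
<broker xmlns="http://activemq.apache.org/schema/core">
  <proxyConnectors>
    <!-- clients connect to 'bind'; the proxy forwards traffic to 'remote' -->
    <proxyConnector name="proxy" bind="tcp://localhost:61002"
                    remote="tcp://remotehost:61616"/>
  </proxyConnectors>
</broker>
```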
Gary:
I just tried to use 5.9 with replicated levelDB and my test failed epically...
The specific problem I have is that after about 1700 messages the whole
thing slows down to a crawl. It doesn't seem to be the case if I am
using plain leveldb, but when using replication I come across problems.
Claus:
Unfortunately I don't have the energy and time to mess with snapshot
builds... I'd love to though.
I just wanted to add that when I am not using replication it works swimmingly.
But I'd really like to get replication working today. It is the main
reason for the upgrade for me.
On 2014-03-25 17:15:18, Claus Ibsen said:
There has been numerous fixes and improves for leveldb on the 5.1
There have been numerous fixes and improvements for leveldb on the 5.10
branch. So you may want to try building from the latest source code and
try with a SNAPSHOT of 5.10.
On Tue, Mar 25, 2014 at 5:55 PM, Oleg Dulin wrote:
> I am running a similar test.
>
> Replicated LevelDB, 3 ZooKeepers, 3 AMQ brokers
I am running a similar test.
Replicated LevelDB, 3 ZooKeepers, 3 AMQ brokers with local_mem sync, I
am publishing messages on a queue using one thread, and taking them off
that queue on another thread.
Performance is abysmal, 1700 messages or so go out pretty quick, but
then it pauses every 5
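A replicated LevelDB store matching the setup described above (3 ZooKeepers, local_mem sync) would look roughly like this in activemq.xml. A sketch: the ZooKeeper addresses, directory, and bind address are assumptions, not taken from the thread:

```xml
<!-- activemq.xml: replicated LevelDB store (sketch) -->
<persistenceAdapter>
  <replicatedLevelDB
      directory="${activemq.data}/leveldb"
      replicas="3"
      bind="tcp://0.0.0.0:0"
      zkAddress="zk1:2181,zk2:2181,zk3:2181"
      zkPath="/activemq/leveldb-stores"
      sync="local_mem"/>
</persistenceAdapter>
```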
Set the cursor high water mark to 110, so it will not be reached
because pfc will kick in first.
Then set sendFailIfNoSpace="true" and the producers will
get an exception when the memory limit is exceeded.
With this setup the default store cursor will behave like a vm cursor.
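The advice above maps onto broker config roughly as follows. A sketch, assuming a 5.x broker; the memory limits and the catch-all queue filter are example values:

```xml
<!-- activemq.xml: cursor high water mark + fail-fast producers (sketch) -->
<broker xmlns="http://activemq.apache.org/schema/core">
  <destinationPolicy>
    <policyMap>
      <!-- 110% high water mark: pfc triggers before the cursor limit -->
      <policyEntry queue=">" memoryLimit="10mb"
                   cursorMemoryHighWaterMark="110"
                   producerFlowControl="true"/>
    </policyMap>
  </destinationPolicy>
  <systemUsage>
    <!-- producers get an exception instead of blocking when memory runs out -->
    <systemUsage sendFailIfNoSpace="true">
      <memoryUsage>
        <memoryUsage limit="64 mb"/>
      </memoryUsage>
    </systemUsage>
  </systemUsage>
</broker>
```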
On 25 March 2014 07:39, tikboy wrote:
> Ac
levelDB also does not support multiple persistence adapter instances
similar to kahaDB.
With the multiple KahaDB (mKahaDB) persistence adapter, destination partitioning
across journals is possible, but there is no support in levelDB for this.
Please correct me if I am wrong !!
Thanks,
Anuj
-
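For reference, the mKahaDB partitioning mentioned above looks roughly like this in activemq.xml. A sketch: the queue filter and directory are examples, not from the thread:

```xml
<!-- activemq.xml: mKahaDB with per-destination journals (sketch) -->
<persistenceAdapter>
  <mKahaDB directory="${activemq.data}/kahadb">
    <filteredPersistenceAdapters>
      <!-- a dedicated journal for queues matching this filter -->
      <filteredKahaDB queue="EXAMPLE.>">
        <persistenceAdapter><kahaDB/></persistenceAdapter>
      </filteredKahaDB>
      <!-- catch-all journal for everything else -->
      <filteredKahaDB>
        <persistenceAdapter><kahaDB/></persistenceAdapter>
      </filteredKahaDB>
    </filteredPersistenceAdapters>
  </mKahaDB>
</persistenceAdapter>
```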
Hi,
Please find attached a log file and thread dump of the incident. The freeze
happened around 08:13:51.
I am using openwire to connect to the broker
failover:(tcp://localhost:2345,tcp://localhost:2345)?initialReconnectDelay=100&randomize=false
and consumer url
tcp://localhost:2345
activemq.l
Hi,
You'll need to adjust your broker url. Adding advisorySupport="false" is
not enough to completely disable advisory messages. You'll need to change
the broker url in your ConnectionFactory to:
tcp://localhost:61616?jms.watchTopicAdvisories=false
Regards,
Richard
http://richardlog.com
On Sat
Actually what I would like is: when the memoryLimit for a certain queue is
reached (even if it is set to 70%), the data should not be offloaded to any
store (memory, kahadb, temp), and the producers should know that an exception
has happened so they can handle it themselves.
What is currently hap
Hi,
Is there support in levelDB for multiple persistence adapters?
In kahaDB we can have multiple kahaDB instances, one for each destination.
With the multiple KahaDB persistence adapter, destination partitioning
across journals is possible.
Is there some feature in levelDB for multip