Where are scheduled messages saved by default in AMQ: in memory or in
KahaDB?
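For what it's worth: with schedulerSupport enabled on a persistent broker, scheduled
messages are not held only in memory; the scheduler keeps its own KahaDB-based job
store on disk (a "scheduler" directory under the broker's data dir, IIRC). Here's a
minimal sketch of producing a scheduled message with the 5.x JMS client; the broker
URL and queue name below are just placeholders:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ScheduledMessage;

public class ScheduledSendExample {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("example.queue"));

        TextMessage message = session.createTextMessage("delayed payload");
        // Ask the broker-side scheduler to deliver this message 60 seconds from now.
        message.setLongProperty(ScheduledMessage.AMQ_SCHEDULED_DELAY, 60000L);
        producer.send(message);

        connection.close();
    }
}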
Yes. It's buggy / not very reliable.
What we did was to use ActiveMQ in embedded mode with our own/internal
daemon infrastructure.
I guess the point is that ActiveMQ is WAY easier to embed than, say,
something like Cassandra or Elasticsearch.
So if you can easily make your own Java daemons I wo
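For what it's worth, here's a minimal sketch of the embedded setup described above,
using BrokerService directly; the broker name, connector URL and data directory are
just placeholders:

import org.apache.activemq.broker.BrokerService;

public class EmbeddedBrokerExample {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("embedded");
        broker.setDataDirectory("target/activemq-data"); // placeholder path
        broker.setPersistent(true);                      // KahaDB store on disk
        broker.setSchedulerSupport(true);                // enable the job scheduler
        broker.addConnector("tcp://localhost:61616");    // optional; vm://embedded works in-process
        broker.start();
        broker.waitUntilStarted();

        // ... the daemon does its work here; clients connect over vm:// or tcp ...

        broker.stop();
        broker.waitUntilStopped();
    }
}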
On Mon, Apr 20, 2015 at 6:24 AM, Tim Bain wrote:
> I'm confused about what would drive the need for this.
>
> Is it the ability to hold more messages than your JVM size allows? If so,
> we already have both KahaDB and LevelDB; what does Chronicle offer that
> those other two don't?
>
>
The abili
With G1GC, you need exactly 1 free bucket per GC thread to be able to
perform garbage collection. By the time you need the next bucket on any
given thread, you've freed up at least one new one (often more), so you've
always got enough space for that next bucket. And given that G1GC shoots
(by def
I'm having a hard time getting a Debian package of ActiveMQ to upgrade.
It seems that the ActiveMQ init script is told to "stop". JMX is not
configured, so this fails, falling back to sending a SIGKILL.
The trouble is that "kill" only ever returns its own exit status and does
not guarantee that t
"You made a statement that sounds like "the JVM
can only use half its memory, because the other half has to be kept free
for GCing", which doesn't match my experience at all. I've observed G1GC
to successfully GC when the heap was nearly 100% full, I'm certain it's not
a problem for CMS because CM
Here's a brief code sample (this assumes the fusesource stompjms client; UNSUBSCRIBE
and ID come from a static import of org.fusesource.stomp.client.Constants):

Stomp stomp = new Stomp("localhost", 61613);
Future<FutureConnection> future = stomp.connectFuture();
FutureConnection connection = future.await();
AsciiBuffer id = connection.nextId();
StompFrame unsubscribe = new StompFrame(UNSUBSCRIBE);
unsubscribe.addHeader(ID, id);
Future unsu
I'm confused about what would drive the need for this.
Is it the ability to hold more messages than your JVM size allows? If so,
we already have both KahaDB and LevelDB; what does Chronicle offer that
those other two don't?
Is it because you see some kind of inefficiency in how ActiveMQ uses mem
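For context on the store question: the persistence adapter is pluggable on the
broker, which is presumably the extension point a Chronicle-backed store would sit
behind, just like KahaDB and LevelDB do today. Here's a minimal sketch of selecting
KahaDB programmatically; the directory path is a placeholder:

import java.io.File;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

public class KahaDBStoreExample {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();

        // Swap in a concrete PersistenceAdapter; KahaDB is the 5.x default anyway.
        KahaDBPersistenceAdapter store = new KahaDBPersistenceAdapter();
        store.setDirectory(new File("target/kahadb")); // placeholder directory
        broker.setPersistenceAdapter(store);

        broker.addConnector("tcp://localhost:61616");
        broker.start();
        broker.waitUntilStarted();
    }
}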