That's right, Ted. You know about our issues.
This is something that would help all qpidd users, and it could also be
'platform-independent',
built on top of a high-performance disk I/O layer (like the TCP I/O layer).
On Mon, Jul 23, 2012 at 6:13 PM, Ted Ross wrote:
On 07/22/2012 06:33 PM, Virgilio Fornazin wrote:
We use MRG-M here too, and we sometimes run into trouble with this
confusing flow-to-disk implementation.
What we would expect, to replace it, is something like a real
'queue-on-disk' with parameters like the current
implementation of flow-to-
Hi Carl,
I definitely do not see any problem in sacrificing the features like
LVQ. I'm not so sure about browsing ... do we need to disable browsing
to have the real flow to disk queue? If yes, what about multiple
consumers connected to the same queue or acknowledging the messages
out of order?
R
On 07/23/2012 07:40 AM, Jakub Scholz wrote:
> Yes, the use of flow-to-disk queues unfortunately doesn't solve the
> memory issue 100%. It just decreases the memory consumption, so the
> point at which the broker runs out of memory is postponed a bit.
>
>
we actually need a 'real' flow-to-disk queue h
On Mon, 2012-07-23 at 08:35 -0700, ParkiratBagga wrote:
> Thanks for the prompt responses. :)
>
> *On looking further into the issue, the following are my observations:*
>
> 1. When this exception initially occurred, it gave *RHM_IORES_EMPTY* when it
> should have given *RHM_IORES_ENQCAPTHRESH*.
This occ
Jakub,
I see the difficulties. Looks like it might not be that simple.
Regards,
Rajith
On Mon, Jul 23, 2012 at 11:35 AM, Jakub Scholz wrote:
> Hi Rajith,
>
> Most of the messages are delivered as broadcasts from one producer
> to multiple receivers. And even when the queue of some receivers
Thanks for the prompt responses. :)
*On looking further into the issue, the following are my observations:*
1. When this exception initially occurred, it gave *RHM_IORES_EMPTY* when it
should have given *RHM_IORES_ENQCAPTHRESH*.
2. There were only 4 messages in the Queue which were not consumed in spite
Hi Rajith,
Most of the messages are delivered as broadcasts from one producer
to multiple receivers. And even when the queue of some receivers is
full because they are not consuming, we still need to deliver the
message to the rest of the receivers and at the same time be aware of
who didn't re
On 07/23/2012 01:58 PM, Toralf Lund wrote:
On 23/07/12 10:42, Gordon Sim wrote:
On 07/23/2012 08:27 AM, Toralf Lund wrote:
Hi.
In my C++ messaging client, I'm reading data from a last-value queue via
a loop that's essentially as follows:
while (1) {
    try {
        session.sync();
        qpid::
Jakub,
I wonder if producer flow control could help here.
If implemented properly, this should (at least theoretically) prevent
the broker from running out of memory due to queue growth.
As you correctly point out, flow-2-disk at best just postpones it, in
addition to the fact that it has a serious imp
On 23/07/12 10:42, Gordon Sim wrote:
On 07/23/2012 08:27 AM, Toralf Lund wrote:
Hi.
In my C++ messaging client, I'm reading data from a last-value queue via
a loop that's essentially as follows:
while (1) {
    try {
        session.sync();
        qpid::messaging::Receiver receiver = session.nextRec
On Mon, 2012-07-23 at 12:25 +0100, Gordon Sim wrote:
> On 07/23/2012 11:52 AM, ParkiratBagga wrote:
> > Hi,
> >
> > I am running a performance test on Qpid with C++ broker 0.12 version and
> > Java Client - 0.12 version libraries.
> >
> > I am using 1 queue and we pass persistent messages at 4-5 Me
Hi all,
I got past it after adding SASL EXTERNAL support.
I am using this SASL configuration file:
/etc/sasl2/qpidd.conf
Change the mech_list option to:
mech_list: PLAIN EXTERNAL
The qpid-route command is as follows:
qpid-route --client-sasl-mechanism=PLAIN -t ssl queue add
qpidd/qpidd@localhost:5807 127.0.0.
Yes, the use of flow-to-disk queues unfortunately doesn't solve the
memory issue 100%. It just decreases the memory consumption, so the
point at which the broker runs out of memory is postponed a bit.
Regards
Jakub
On Mon, Jul 23, 2012 at 11:09 AM, Gordon Sim wrote:
> On 07/22/2012 09:31 PM, Jaku
On 07/23/2012 11:52 AM, ParkiratBagga wrote:
Hi,
I am running a performance test on Qpid with C++ broker 0.12 version and
Java Client - 0.12 version libraries.
I am using 1 queue and we pass persistent messages at 4-5 messages/sec, with
each message 100 KB in size. Also we are using flow to disk
Can somebody please tell if this is a bug and how to handle it.
--
View this message in context:
http://qpid.2158936.n2.nabble.com/Qpid-Journal-Error-RHM-IORES-EMPTY-tp7580022p7580023.html
Sent from the Apache Qpid users mailing list archive at Nabble.com.
Hi,
I am running a performance test on Qpid with C++ broker 0.12 version and
Java Client - 0.12 version libraries.
I am using 1 queue and we pass persistent messages at 4-5 messages/sec, with
each message 100 KB in size. Also we are using flow to disk and we use the
persistence store (BDB). We hav
On 07/22/2012 09:31 PM, Jakub Scholz wrote:
We expect the brokers to deliver approximately hundreds of GB of
messages per day. Under normal circumstances, most of the messages
will be consumed by the clients almost immediately, but in some
exceptional situations, they may need to be stored on the
On 07/23/2012 08:27 AM, Toralf Lund wrote:
Hi.
In my C++ messaging client, I'm reading data from a last-value queue via
a loop that's essentially as follows:
while (1) {
    try {
        session.sync();
        qpid::messaging::Receiver receiver = session.nextReceiver(timeout);
        message = receiv
Hi.
In my C++ messaging client, I'm reading data from a last-value queue via
a loop that's essentially as follows:
while (1) {
    try {
        session.sync();
        qpid::messaging::Receiver receiver = session.nextReceiver(timeout);
        message = receiver.fetch(qpid::messaging::Duration::IMMEDIATE)