On 13 October 2016 at 19:54, rammohan ganapavarapu <[email protected]>
wrote:

> Rob,
>
> Understood. We are doing negative testing, e.g. what happens to the broker
> when all the consumers are down but producers keep pumping messages. I was
> under the impression that the flow-to-disk threshold would keep the broker
> from going bad because of OOM. So I bumped up the heap and direct memory
> settings of the broker and tried to restart, but it complained with the
> below error.
>
>
>
> *2016-10-13 18:28:46,157 INFO  [Housekeeping[default]]
> (q.m.q.flow_to_disk_active) - [Housekeeping[default]]
> [vh(/default)/qu(ax-q-mxgroup001)] QUE-1014 : Message flow to disk active
> :  Message memory use 13124325 kB (13gb) exceeds threshold 168659 kB
> (168mb)*
>
>
> But the actual flow-to-disk threshold reported by the broker is:
>
> * "broker.flowToDiskThreshold" : "858993459", (which is 40% of
> direct-mem (2g))*
>
> I know my message size is more than the threshold, but I am trying to
> understand why the log message says 168mb.
>

So the broker takes its overall flow to disk "quota" and divides it up
between virtual hosts, and for each virtual host divides its share up
between the queues on that virtual host.  This allows for some fairness
when multiple virtual hosts or multiple queues are actually representing
different applications.  Individual queues thus may start flowing to disk
even though the overall threshold has not yet been reached.
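To make that arithmetic concrete, here is a rough sketch (not the broker's
actual code) of the division under an assumed even split; the real broker may
apportion the quota differently, and the class name, parameters, and the
five-queue example below are hypothetical, chosen only for illustration.

```java
// Rough sketch (not actual Qpid broker code) of the quota division
// described above: the broker-wide flow-to-disk quota is split between
// virtual hosts, and each virtual host's share is split between its
// queues. The even split is an assumption for illustration only.
public class FlowToDiskQuota {

    // Per-queue threshold under an even split across hosts and queues.
    public static long perQueueThreshold(long brokerQuotaBytes,
                                         int virtualHostCount,
                                         int queuesOnThisHost) {
        long perHostShare = brokerQuotaBytes / virtualHostCount;
        return perHostShare / queuesOnThisHost;
    }

    public static void main(String[] args) {
        // 858993459 bytes (40% of 2g direct memory, as in the thread),
        // one virtual host, five hypothetical queues:
        System.out.println(perQueueThreshold(858993459L, 1, 5));
        // prints 171798691 -- each queue starts flowing to disk well
        // below the broker-wide figure, which is why a per-queue log
        // line can report far less than broker.flowToDiskThreshold.
    }
}
```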


>
> So to keep the broker running I have enabled background recovery, and it
> seems to be working fine, but I am curious to know how the broker loads
> the messages back from disk into memory - does it load them all at once,
> or in batches?
>

So on recovery, and also when an individual message has flowed to disk, the
broker simply reloads individual messages into memory as it needs them, on
an on-demand basis.
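As a sketch of that on-demand behaviour (an illustration, not Qpid's actual
internals): a small heap-resident handle can keep just the message identity,
fetch the body from the store only when first accessed, and drop it again when
the message flows to disk. The `LazyMessage` class and its `Supplier`-backed
store are hypothetical names invented for this example.

```java
import java.util.function.Supplier;

// Illustrative sketch (not Qpid's actual implementation) of on-demand
// message reloading: a small handle stays in heap, while the body is
// fetched from the store only when first needed and can be dropped
// again when the message flows to disk.
public class LazyMessage {
    private final long messageId;          // lightweight metadata kept in heap
    private final Supplier<byte[]> store;  // stands in for the disk store
    private byte[] content;                // null while flowed to disk

    public LazyMessage(long messageId, Supplier<byte[]> store) {
        this.messageId = messageId;
        this.store = store;
    }

    // Reload from "disk" only on first access after a flow-to-disk.
    public byte[] getContent() {
        if (content == null) {
            content = store.get();
        }
        return content;
    }

    // Release the body under memory pressure; the handle itself remains.
    public void flowToDisk() {
        content = null;
    }

    public static void main(String[] args) {
        LazyMessage m = new LazyMessage(42L, () -> new byte[]{1, 2, 3});
        m.getContent();   // loads the body on demand
        m.flowToDisk();   // drops it again
        m.getContent();   // reloads on next access
    }
}
```

Note that only the body leaves the heap; the handle itself stays, which is
why heap still bounds the total number of messages, as Lorenz notes below.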

Hope this helps,
Rob


>
> Thanks,
> Ram
>
> On Thu, Oct 13, 2016 at 11:29 AM, Rob Godfrey <[email protected]>
> wrote:
>
> > On 13 October 2016 at 17:36, rammohan ganapavarapu <
> > [email protected]>
> > wrote:
> >
> > > Lorenz,
> > >
> > > Thank you for the link. So no matter how much heap you have, you will
> > > hit the hard limit at some point, right? I thought flow to disk would
> > > keep the broker from crashing because of an out-of-memory issue, but
> > > it looks like that's not the case.
> > >
> > > In my environment we will have a dynamic number of producers and
> > > consumers, so it's hard to pre-measure how much heap we can allocate
> > > based on the number of connections/sessions.
> > >
> > > Ram
> > >
> > >
> > Yeah - currently there is always a hard limit based on the number of
> > "queue entries".  Ultimately there's a trade-off to be had between
> > designing a queue data structure which is high performing, vs. one which
> > can be offloaded onto disk.  This gets even more complicated for queues
> > which are not strict FIFO (priority queues, LVQ, etc.) or where
> > consumers have selectors.  Ultimately, if you are storing millions of
> > messages in your broker then you are probably doing things wrong - we
> > would expect people to enforce queue limits and flow control rather than
> > expect the broker to have infinite capacity (and even off-loading to
> > disk you will still run out of disk space at some point).
> >
> > -- Rob
> >
> >
> > >
> > >
> > > On Thu, Oct 13, 2016 at 9:05 AM, Lorenz Quack <[email protected]>
> > > wrote:
> > >
> > > > Hello Ram,
> > > >
> > > > may I refer you to the relevant section of the documentation [1].
> > > > As explained there in more detail, the broker keeps a representation
> > > > of each message in heap even when flowing the message to disk.
> > > > Therefore the amount of JVM heap memory puts a hard limit on the
> > > > number of messages the broker can hold.
> > > >
> > > > Kind Regards,
> > > > Lorenz
> > > >
> > > > [1] https://qpid.apache.org/releases/qpid-java-6.0.4/java-broker/book/Java-Broker-Runtime-Memory.html
> > > >
> > > >
> > > >
> > > > On 13/10/16 16:40, rammohan ganapavarapu wrote:
> > > >
> > > >> Hi,
> > > >>
> > > >> We are doing some load testing using Java broker 6.0.2 by stopping
> > > >> all consumers; the broker crashed at 644359 messages. Even if I try
> > > >> to restart the broker, it crashes with the same OOM error.
> > > >>
> > > >>   "persistentEnqueuedBytes" : 12731167222,
> > > >>      "persistentEnqueuedMessages" : 644359,
> > > >>      "queueDepthBytes" : 12731167222,
> > > >>      "queueDepthMessages" : 644359,
> > > >>      "totalDequeuedBytes" : 0,
> > > >>      "totalDequeuedMessages" : 0,
> > > >>      "totalEnqueuedBytes" : 12731167222,
> > > >>      "totalEnqueuedMessages" : 644359,
> > > >>
> > > >> JVM settings of broker: -Xmx512m -XX:MaxDirectMemorySize=1536m
> > > >>
> > > >> "broker.flowToDiskThreshold" : "644245094",
> > > >>
> > > >> So theoretically the broker should flow those messages to disk
> > > >> after the threshold, and then the broker shouldn't have hit an OOM
> > > >> exception, right? Do I have to do any other tuning?
> > > >>
> > > >> Thanks,
> > > >> Ram
> > > >>
> > > >>
> > > >
> > > > ---------------------------------------------------------------------
> > > > To unsubscribe, e-mail: [email protected]
> > > > For additional commands, e-mail: [email protected]
> > > >
> > > >
> > >
> >
>
