that's a good point... not sure what the value of flow controlling the
producer is if there are no consumers on a topic... maybe preserving
retroactive consumers?
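
For what it's worth, a retroactive consumer is the case where holding on to
messages for a consumerless topic can pay off: the subscriber asks the broker
to replay messages published shortly before it subscribed. A minimal sketch of
creating one, assuming a broker at tcp://localhost:61616 and a made-up topic
name (how far back the replay goes depends on the topic's
subscriptionRecoveryPolicy):

    import javax.jms.Connection;
    import javax.jms.MessageConsumer;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.apache.activemq.command.ActiveMQTopic;

    public class RetroactiveConsumerExample {
        public static void main(String[] args) throws Exception {
            // broker URL and topic name are assumptions for illustration only
            Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // the consumer.retroactive destination option asks the broker to
            // replay messages published before this subscription existed
            MessageConsumer consumer = session.createConsumer(
                    new ActiveMQTopic("SOME.TOPIC?consumer.retroactive=true"));

            System.out.println("received: " + consumer.receive(5000));
            connection.close();
        }
    }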


On Wed, Apr 17, 2013 at 9:18 AM, SuoNayi <suonayi2...@163.com> wrote:

> The message flow cannot reach that point at the moment.
> The send method first checks whether memory is full; if it is, the thread
> waits for space and the producer gets blocked.
> What I mean is: before checking whether memory is full, we could check
> whether there are any consumers, and if there are none, just return.
> That way we avoid blocking the producer.
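
A rough sketch of the reordering being proposed here; this is not the actual
Topic.send() code, and names like memoryUsage, consumers and
onMessageWithNoConsumers are only stand-ins for whatever the real
implementation uses:

    // Hypothetical reordering, not the real Topic.send() implementation.
    void send(ProducerBrokerExchange producerExchange, Message message) throws Exception {
        synchronized (consumers) {
            if (consumers.isEmpty()) {
                // no subscribers: advise and return *before* the usage check,
                // so the producer never blocks waiting for memory space
                onMessageWithNoConsumers(producerExchange.getConnectionContext(), message);
                return;
            }
        }
        if (memoryUsage.isFull()) {
            // today the producer can end up parked here even though the
            // message would go nowhere anyway
            memoryUsage.waitForSpace();
        }
        dispatch(producerExchange.getConnectionContext(), message);
    }

Whether that early return is actually desirable is exactly the question above
about retroactive consumers.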
>
>
> At 2013-04-18 00:08:33,"Christian Posta" <christian.po...@gmail.com>
> wrote:
> >Check the dispatch method in Topic.java... we do just that:
> >
> >            synchronized (consumers) {
> >                if (consumers.isEmpty()) {
> >                    onMessageWithNoConsumers(context, message);
> >                    return;
> >                }
> >            }
> >
> >onMessageWithNoConsumers really just sends advisories for specific cases
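
If it helps, those advisories can be watched with an ordinary consumer. A
minimal sketch, assuming a local broker and a made-up topic name; if I
remember right, AdvisorySupport.getNoTopicConsumersAdvisoryTopic() resolves
the advisory destination for a given topic, and the broker only publishes the
message there when sendAdvisoryIfNoConsumers is enabled for that destination:

    import javax.jms.Connection;
    import javax.jms.MessageConsumer;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.apache.activemq.advisory.AdvisorySupport;
    import org.apache.activemq.command.ActiveMQTopic;

    public class NoConsumerAdvisoryWatcher {
        public static void main(String[] args) throws Exception {
            // broker URL and topic name are assumptions for illustration only
            Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            ActiveMQTopic watched = new ActiveMQTopic("SOME.TOPIC");
            // subscribes to something like ActiveMQ.Advisory.NoConsumer.Topic.SOME.TOPIC
            MessageConsumer advisories = session.createConsumer(
                    AdvisorySupport.getNoTopicConsumersAdvisoryTopic(watched));

            System.out.println("advisory: " + advisories.receive(5000));
            connection.close();
        }
    }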
> >
> >
> >On Sun, Apr 7, 2013 at 7:24 AM, SuoNayi <suonayi2...@163.com> wrote:
> >
> >> Hi, we're using a virtual topic to fan messages out to 12
> >> queues/consumers.
> >>
> >> When the consumer of one of those queues becomes very slow (it is
> >> located in London, far from our data center in China), our single
> >> producer also becomes very slow to publish messages.
> >> What we observed is this: only after the slow consumer in London
> >> dequeues 200 messages can the single producer publish another 200
> >> messages; otherwise it is blocked.
> >> It looks as if PFC is at work, but I am sure I disabled PFC for queues
> >> when the broker was deployed to production.
> >> PFC for topics was still enabled, though, so I disabled that as well
> >> and restarted the broker.
> >> After that my producer can keep publishing messages regardless of the
> >> slow consumer, but the memory usage keeps climbing all the time.
> >> Eventually I saw memory usage reach 460+, which surprised me a lot.
> >> AFAIK, pending messages in transactions can contribute to exceeding
> >> the memory usage limit.
> >> Since we have only a single producer, we send 100 messages per batch
> >> in a transaction, and every message is less than 1 KB, I cannot
> >> understand how memory usage can exceed the limit by so much.
> >>
> >>
> >> Thanks,
> >> SuoNayi
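
Since the thread talks about disabling PFC for queues and topics separately,
here is a minimal sketch of doing that with per-destination policies on an
embedded broker; the broker setup and the ">" wildcard entries are assumptions
for illustration (a production broker would normally carry the equivalent
policyEntry settings in activemq.xml):

    import java.util.Arrays;
    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.broker.region.policy.PolicyEntry;
    import org.apache.activemq.broker.region.policy.PolicyMap;

    public class NoPfcBroker {
        public static void main(String[] args) throws Exception {
            BrokerService broker = new BrokerService();

            // disable producer flow control for every topic
            PolicyEntry topics = new PolicyEntry();
            topics.setTopic(">");
            topics.setProducerFlowControl(false);

            // ...and for every queue (the virtual-topic consumer queues)
            PolicyEntry queues = new PolicyEntry();
            queues.setQueue(">");
            queues.setProducerFlowControl(false);

            PolicyMap policies = new PolicyMap();
            policies.setPolicyEntries(Arrays.asList(topics, queues));
            broker.setDestinationPolicy(policies);

            broker.addConnector("tcp://0.0.0.0:61616");
            broker.start();
            broker.waitUntilStopped();
        }
    }

With PFC disabled the broker no longer throttles the producer, so memory usage
on a destination with a slow consumer can climb well past its limit, which is
consistent with what is described above.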
> >
> >
> >
> >
> >--
> >*Christian Posta*
> >http://www.christianposta.com/blog
> >twitter: @christianposta
>



-- 
*Christian Posta*
http://www.christianposta.com/blog
twitter: @christianposta
