PeerNetworkConnector extends DiscoveryNetworkConnector so I can fire
listeners for onServiceAdd and onServiceRemove.
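
The class itself isn't shown in the thread. A minimal sketch of what such a subclass might look like, assuming ActiveMQ 5.6.0 on the classpath and a made-up PeerListener callback interface (the real constructor in this thread takes (peerAddress, uri, this), which isn't reproduced here):

```java
import java.net.URI;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

import org.apache.activemq.command.DiscoveryEvent;
import org.apache.activemq.network.DiscoveryNetworkConnector;

// Hypothetical sketch -- the actual PeerNetworkConnector is not shown in
// this thread; only the subclassing idea is.
public class PeerNetworkConnector extends DiscoveryNetworkConnector {

    /** Hypothetical callback interface, not part of ActiveMQ. */
    public interface PeerListener {
        void peerAdded(DiscoveryEvent event);
        void peerRemoved(DiscoveryEvent event);
    }

    private final List<PeerListener> listeners =
            new CopyOnWriteArrayList<PeerListener>();

    public PeerNetworkConnector(URI discoveryURI) throws Exception {
        super(discoveryURI);
    }

    public void addPeerListener(PeerListener listener) {
        listeners.add(listener);
    }

    @Override
    public void onServiceAdd(DiscoveryEvent event) {
        super.onServiceAdd(event);     // let ActiveMQ establish the bridge
        for (PeerListener l : listeners) {
            l.peerAdded(event);
        }
    }

    @Override
    public void onServiceRemove(DiscoveryEvent event) {
        super.onServiceRemove(event);  // let ActiveMQ tear down the bridge
        for (PeerListener l : listeners) {
            l.peerRemoved(event);
        }
    }
}
```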


On 27 November 2012 14:16, Christian Posta <christian.po...@gmail.com> wrote:

> Okay good to know. I suppose this error happened just once randomly and you
> cannot reproduce?
>
> BTW... what is PeerNetworkConnector in your config:
>
>     NetworkConnector networkConnector = new PeerNetworkConnector(peerAddress, uri, this);
>
>
> On Tue, Nov 27, 2012 at 7:08 AM, Mark Anderson <manderso...@gmail.com> wrote:
>
> > The prefetch size was set on the network connector as we were getting
> > messages about slow consumers across the network bridge.
> >
> > As far as I can see the network bridge had not failed. The connector
> > entries in the log are for a client subscription that will also have the
> > topic prefetch set to 32766. I am trying to get logs from the client.
> >
> > The broker on the other end of the bridge uses the same configuration.
> >
> >
> > On 27 November 2012 13:41, Christian Posta <christian.po...@gmail.com> wrote:
> >
> > > Answers to your questions:
> > >
> > > 1) Not sure yet.
> > > 2) Because at the moment, sendFailIfNoSpace is only triggered when
> > > producer flow control is on (at least in this case, for topics).
> > > 3) As gtully said, connections cannot be shut down if they are blocked
> > > somehow.
> > >
> > > I noticed in your config you explicitly set the prefetch on the network
> > > connector to 32766. The default for network connectors is 1000 and the
> > > default for regular topics is Short.MAX_VALUE (which is 32767). Since the
> > > bridge doesn't have a prefetch buffer like normal clients do, setting the
> > > prefetch to 32766 could end up flooding it. Any reason why you have it
> > > set to 32766?
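> > > Since you're wiring the connector up in Java, the prefetch is just a
> > > property on the connector. A hedged sketch (assuming ActiveMQ 5.6.0;
> > > PeerNetworkConnector and its constructor arguments are from your code,
> > > which isn't shown here), with the 1000 default written out for
> > > comparison:
> > >
> > > ```java
> > > NetworkConnector networkConnector =
> > >         new PeerNetworkConnector(peerAddress, uri, this);
> > > // setPrefetchSize is inherited from NetworkBridgeConfiguration;
> > > // 1000 is the documented default for network connectors.
> > > networkConnector.setPrefetchSize(1000);
> > > ```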
> > >
> > > So TopicSubscriptions should always count against the broker's main
> > > memory usage. If this one has the destination's memory limit, then
> > > something went wrong. As Gary said, the pending message cursor's
> > > messages would be spooled to disk when the main memory limit reaches
> > > its high water mark (70% by default), but that appears not to have
> > > happened in this case.
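> > > For reference, that high water mark is tunable per destination in the
> > > broker XML. An illustrative fragment (not from your config; 70 is
> > > already the default):
> > >
> > > ```xml
> > > <destinationPolicy>
> > >   <policyMap>
> > >     <policyEntries>
> > >       <!-- percentage of the memory limit at which the pending
> > >            cursor starts spooling messages to temp storage -->
> > >       <policyEntry topic=">" cursorMemoryHighWaterMark="70"/>
> > >     </policyEntries>
> > >   </policyMap>
> > > </destinationPolicy>
> > > ```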
> > >
> > > Are there any indications that the TopicSubscription is for the network
> > > bridge? Or maybe that the network bridge failed somehow? I see that the
> > > dispatched count is the same as what you've set for your prefetch on the
> > > bridge, but anything else that can point to that might be helpful. For
> > > example, are those port numbers in the transport connector logs for the
> > > network bridge?
> > >
> > > How is the broker on the other end of the bridge configured? Same?
> > >
> > >
> > > On Fri, Nov 23, 2012 at 8:36 AM, Mark Anderson <manderso...@gmail.com> wrote:
> > >
> > > > I have ActiveMQ 5.6.0 configured as follows:
> > > >
> > > > Producer Flow Control = false
> > > > Send Fail If No Space = true
> > > > Memory Usage Limit = 128Mb
> > > > Temp Usage Limit = 1Gb
> > > >
> > > > All my messages are non-persistent. The temp usage is configured to
> > > > handle spikes/slow consumers when processing messages.
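> > > > In broker XML terms, the settings above correspond to something like
> > > > the following (an illustrative sketch, not my exact file; the broker
> > > > name is a placeholder):
> > > >
> > > > ```xml
> > > > <broker xmlns="http://activemq.apache.org/schema/core"
> > > >         brokerName="broker1">
> > > >   <destinationPolicy>
> > > >     <policyMap>
> > > >       <policyEntries>
> > > >         <policyEntry topic=">" producerFlowControl="false"/>
> > > >       </policyEntries>
> > > >     </policyMap>
> > > >   </destinationPolicy>
> > > >   <systemUsage>
> > > >     <systemUsage sendFailIfNoSpace="true">
> > > >       <memoryUsage>
> > > >         <memoryUsage limit="128 mb"/>
> > > >       </memoryUsage>
> > > >       <tempUsage>
> > > >         <tempUsage limit="1 gb"/>
> > > >       </tempUsage>
> > > >     </systemUsage>
> > > >   </systemUsage>
> > > > </broker>
> > > > ```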
> > > >
> > > > I continually see the following in the logs:
> > > >
> > > > WARN  Nov 20 20:55:47 (13748874 [InactivityMonitor Async Task: java.util.concurrent.ThreadPoolExecutor$Worker@7ea0e15b[State = 0, empty queue]] org.apache.activemq.broker.TransportConnection.Transport) Transport Connection to: tcp://192.168.2.103:35186 failed: java.net.SocketException: Broken pipe
> > > > INFO  Nov 20 20:55:51 (13752162 [ActiveMQ Transport: tcp:///192.168.2.103:35168] org.apache.activemq.broker.TransportConnection) The connection to 'tcp://192.168.2.103:35166' is taking a long time to shutdown.
> > > > INFO  Nov 20 20:55:56 (13757162 [ActiveMQ Transport: tcp:///192.168.2.103:35168] org.apache.activemq.broker.TransportConnection) The connection to 'tcp://192.168.2.103:35166' is taking a long time to shutdown.
> > > > INFO  Nov 20 20:56:01 (13762162 [ActiveMQ Transport: tcp:///192.168.2.103:35168] org.apache.activemq.broker.TransportConnection) The connection to 'tcp://192.168.2.103:35166' is taking a long time to shutdown.
> > > >
> > > > I'm not sure why the connection will never shut down.
> > > >
> > > > I then see the following message:
> > > >
> > > > org.apache.activemq.broker.region.TopicSubscription) TopicSubscription: consumer=ID:linux-5ks2-57958-1353426643811-3:1:378:1, destinations=1, dispatched=32766, delivered=0, matched=0, discarded=0: Pending message cursor [org.apache.activemq.broker.region.cursors.FilePendingMessageCursor@4c41cfa2] is full, temp usage (0%) or memory usage (211%) limit reached, blocking message add() pending the release of resources.
> > > >
> > > > This leads me to the following questions:
> > > >
> > > > 1) Why would the memory usage be 211% while temp usage is 0%?
> > > > 2) The thread dump shows that send calls on producers are blocking. Why
> > > > would they not throw exceptions when sendFailIfNoSpace = true?
> > > > 3) Would the issue with connection shutdown contribute to the memory
> > > > usage?
> > > >
> > > > Thanks,
> > > > Mark
> > > >
> > >
> > >
> > >
> > > --
> > > *Christian Posta*
> > > http://www.christianposta.com/blog
> > > twitter: @christianposta
> > >
> >
>
