What throughput do you see across the WAN for an application other than
ActiveMQ? And what speed do you get on the same connection when it does NOT go
over the WAN?

WANs often prioritize and shape traffic; I wonder if you're hitting a bottleneck in the WAN itself.
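Also, the cap you quote is suspicious: a 64 KB TCP window over a 140 ms round
trip works out to almost exactly that number:

    65536 bytes * 8 / 0.140 s ≈ 3.74 Mbit/s

That is the classic bandwidth-delay-product ceiling for a single TCP
connection, so it may be that the 10 MB buffer you configured never actually
takes effect on the socket.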

On Tue, May 19, 2015 at 3:49 AM, Leung Wang Hei <gemaspec...@yahoo.com.hk>
wrote:

> Hi all,
>
> There seems to be an invisible ceiling on the socket buffer for the ActiveMQ
> network bridge.  We expected that increasing the TCP socket buffer size would
> give higher throughput, but it does not.  Here are the test details:
>
> - 2 brokers (A, B) bridged together over a WAN with 140 ms network latency.
> - One single duplex network connector is setup at broker B, statically
> includes one topic
> - 10 producers, each sending 10K messages.  All are ActiveMQObjectMessage.
> - Socket buffer size set as a url argument on the network connector at broker
> B and on the transport connector at broker A
> - Used Wireshark to capture the link traffic
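>
> For reference, the capture can be reproduced with roughly the following
> (interface name as in the traffic-control setup below; the output file name
> is just an example):
>
> tcpdump -i ens32 -w bridge.pcap 'tcp port 61616'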
>
> The Wireshark capture shows that throughput is always capped at around
> 3.74 Mbit/s, the same maximum as with the default 64 KB socket buffer.  The
> config details are attached below.
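>
> One way to check whether the 10 MB buffer is actually applied to the live
> bridge socket is ss on Linux (a sketch; the port filter matches the openwire
> connector):
>
> ss -tmi '( sport = :61616 or dport = :61616 )'
>
> If the setting took effect, the skmem rb/tb values reported there should be
> near 10485760 rather than around 64K.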
>
> I don't suspect a bug in ActiveMQ, so am I missing something?  Any advice
> would be greatly appreciated.
>
>
> *Broker A*
> <transportConnectors>
>     <transportConnector name="openwire"
>         uri="tcp://0.0.0.0:61616?transport.socketBufferSize=10485760"/>
>     <transportConnector name="openwirelog"
>         uri="tcp://0.0.0.0:61617"/>
>     <transportConnector name="stomp"
>         uri="stomp://0.0.0.0:61613"/>
> </transportConnectors>
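>
> (Side note: on Linux, buffer sizes requested via setsockopt are silently
> capped at net.core.rmem_max / net.core.wmem_max, which default to far below
> 10 MB, so the OS limits may need raising on both hosts for a value like this
> to take effect. A sketch; the values here are illustrative:
>
> sysctl -w net.core.rmem_max=10485760
> sysctl -w net.core.wmem_max=10485760
> )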
>
> *Broker B*
> <destinationPolicy>
>     <policyMap>
>         <policyEntries>
>             <policyEntry topic=">" producerFlowControl="false"
>                 advisoryForDiscardingMessages="true"
>                 advisoryForSlowConsumers="true">
>                 <pendingSubscriberPolicy>
>                     <vmCursor />
>                 </pendingSubscriberPolicy>
>             </policyEntry>
>         </policyEntries>
>     </policyMap>
> </destinationPolicy>
>
>
> <networkConnector name="nc1-hk"
>     uri="static://(tcp://brokerA:61616?socketBufferSize=10485760)"
>     duplex="true"
>     networkTTL="2">
>     <staticallyIncludedDestinations>
>         <topic physicalName="test"/>
>     </staticallyIncludedDestinations>
> </networkConnector>
>
>
> *Linux traffic control*
> tc qdisc add dev ens32 root handle 1: htb default 12
> tc class add dev ens32 parent 1: classid 1:1 htb rate 20mbit ceil 20mbit
> tc qdisc add dev ens32 parent 1:1 handle 20: netem latency 140ms
> tc filter add dev ens32 protocol ip parent 1:0 prio 1 u32 match ip dst brokerB_Ip flowid 1:1
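>
> (As a baseline for the shaped link itself, a plain iperf run between the two
> hosts shows what raw TCP manages under the same 140 ms latency. A sketch,
> assuming iperf3 is installed on both ends:
>
> brokerA$ iperf3 -s
> brokerB$ iperf3 -c brokerA -t 30 -w 10M
> )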
>
>
> Best regards,
> Leung Wang Hei
>



-- 
*Christian Posta*
twitter: @christianposta
http://www.christianposta.com/blog
http://fabric8.io
