On 11/09/2016 02:25 PM, Satish Patel wrote:
If I send requests directly to server1 and server2, everything works!! But
if I point my client at the LBaaS VIP, it doesn't.
The LBaaS runs on the network node, and server1 and server2 run on a compute node.
So LBaaS traffic goes from the network node to the compute node over VXLAN
(overlay), which has a 1450 MTU, and that is what causes the issue. If I change
that MTU to 1500, it works.
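For reference, that 1450 lines up with the usual VXLAN arithmetic; a
back-of-the-envelope sketch in Python, assuming IPv4 with no options and the
standard encapsulation header sizes:

    # VXLAN encapsulation overhead, per the standard header sizes:
    # outer IPv4 (20) + outer UDP (8) + VXLAN (8) + inner Ethernet (14).
    VXLAN_OVERHEAD = 20 + 8 + 8 + 14           # 50 bytes
    IPV4_HEADER = 20                           # no IP options
    TCP_HEADER = 20                            # no TCP options

    underlay_mtu = 1500
    overlay_mtu = underlay_mtu - VXLAN_OVERHEAD             # 1450
    overlay_mss = overlay_mtu - IPV4_HEADER - TCP_HEADER    # 1410

    print(f"overlay MTU {overlay_mtu}, expected TCP MSS {overlay_mss}")

A client on a plain 1500-byte network will advertise an MSS of 1460, so if
that larger value ever governs segments headed into the overlay, they will
not fit.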
So the question is what is wrong with the LBaaS. I'm not familiar with the
workings of Octavia, but in general terms I would expect the following
to happen at the TCP level when there is a load balancer sitting between
two end nodes:
1) Client sends TCP connection request (aka TCP SYNchronize segment) to
the Load Balancer. That TCP SYN contains a Maximum Segment Size (MSS)
option based on the client's MTU.
2) The LB sends a TCP connection response - a TCP SYN|ACK to the client.
It will have a TCP MSS option based on the MTU of the LBaaS's egress interface.
3) Between the LBaaS and the client, the smaller of the two MSS options
will be used.
Meanwhile, I expect the same thing to happen between the LB and the
back-end server.
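If you want to check what MSS a given connection actually ended up with,
Linux exposes it through the TCP_MAXSEG socket option; a minimal sketch in
Python (the address and port below are placeholders for the VIP or a
back-end server):

    import socket

    def negotiated_mss(host, port):
        # On Linux, reading TCP_MAXSEG on a connected socket reports
        # the maximum segment size in effect for that connection.
        with socket.create_connection((host, port), timeout=5) as s:
            return s.getsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG)

    print(negotiated_mss("192.0.2.10", 8140))  # placeholder VIP, puppet port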
In theory then, there could be two MSSes at work here - one on the TCP
connection between the LBaaS and the client, and one between the LBaaS
and the server. And unless there is something amiss with the LBaaS, it
should be happiness and joy. Certainly that should be the case if the
LBaaS were a process with two sockets, moving data between them.
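A toy sketch of one direction of such a relay, to illustrate why that
arrangement is insulated from MSS differences - each socket's own TCP stack
re-segments whatever bytes it is handed:

    import socket

    def relay(src: socket.socket, dst: socket.socket) -> None:
        # recv() returns a plain byte stream, re-assembled by src's
        # TCP stack; sendall() hands the bytes to dst's TCP stack,
        # which segments them again per dst's own (possibly smaller) MSS.
        while True:
            data = src.recv(65536)
            if not data:
                break
            dst.sendall(data)

Nothing in that loop ever sees an individual packet, so the two MSSes never
have a chance to interact.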
I am speculating wildly, but if the "external" connection had a larger
MSS than the internal one, and the LBaaS code somehow tried to
move a packet "directly" from one connection to the other, then that could
cause problems.
If you can, you might try following the packets.
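For instance, capture the SYNs on both sides of the LB and compare the MSS
option each party advertises; a sketch using scapy, assuming it is installed
somewhere the traffic is visible (plain tcpdump of the handshakes would show
the same thing):

    from scapy.all import TCP, sniff   # assumes scapy is available

    def show_mss(pkt):
        # Print the MSS option carried by each TCP SYN we see.
        if pkt.haslayer(TCP) and pkt[TCP].flags & 0x02:   # SYN bit set
            for opt, val in pkt[TCP].options:
                if opt == "MSS":
                    print(pkt.sprintf("%IP.src%:%TCP.sport% -> "
                                      "%IP.dst%:%TCP.dport%"), "MSS", val)

    sniff(filter="tcp[tcpflags] & (tcp-syn) != 0", prn=show_mss, store=False)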
rick jones
On Wed, Nov 9, 2016 at 4:58 PM, Rick Jones <rick.jon...@hpe.com> wrote:
On 11/09/2016 08:06 AM, Satish Patel wrote:
We have a 3-node cluster on Mitaka OpenStack and we are using DVR
networking. Recently we built two puppetmaster servers on OpenStack and
put them behind an LBaaSv2 load balancer to share load, but I found that
none of my clients were able to talk correctly to the puppetmaster servers
on OpenStack. After lots of research I found that OpenStack VMs use MTU
1400, while the rest of my puppet agent servers, which are not on
OpenStack, use MTU 1500.
As soon as I changed the puppet agent MTU to 1400, everything started
working. But I am surprised that OpenStack uses 1400 for VMs; there must
be a reason, like VXLAN or GRE.
So as an experiment I changed the MTU to 1500 on the puppetmaster server on
OpenStack, but it didn't help. How do I fix this issue?
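Lowering the agent MTU works because it shrinks the MSS the agent
advertises. Purely as a hypothetical sketch of the socket-level analogue,
one could clamp TCP_MAXSEG before the agent connects, instead of touching
the interface MTU (the host name and the value 1360 are placeholders, not
measured figures):

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Clamp the MSS this socket will advertise in its SYN (Linux).
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG, 1360)
    s.connect(("puppetmaster.example.com", 8140))  # placeholder host

But before reaching for workarounds, a couple of questions: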
What happens if LBaaS isn't between the client and server?
Does Puppet do anything with UDP or is it all TCP?
rick jones
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack