Update:

After running iperf between instances on the same virtual network, it looks
like no instance can get more than 2 Mbit/s. Throughput is also erratic,
ranging from under 1 Mbit/s to, but never above, 2 Mbit/s:

user@localhost:~$ iperf -c 10.1.0.1 -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.1.0.1, TCP port 5001
TCP window size: 86.8 KByte (default)
------------------------------------------------------------
[  5] local 10.1.0.10 port 50432 connected with 10.1.0.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-11.0 sec  1.25 MBytes   950 Kbits/sec
[  4] local 10.1.0.10 port 5001 connected with 10.1.0.1 port 53839
[  4]  0.0-11.1 sec  2.50 MBytes  1.89 Mbits/sec
user@localhost:~$ iperf -c 10.1.0.1 -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.1.0.1, TCP port 5001
TCP window size: 50.3 KByte (default)
------------------------------------------------------------
[  5] local 10.1.0.10 port 52248 connected with 10.1.0.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-12.6 sec  1.25 MBytes   834 Kbits/sec
[  4] local 10.1.0.10 port 5001 connected with 10.1.0.1 port 53840
[  4]  0.0-11.9 sec  2.13 MBytes  1.49 Mbits/sec
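For what it's worth, a UDP run might help separate raw packet loss from TCP
backing off under loss. Something like the following (iperf 2 syntax, same
target IP as above; the 10 Mbit/s rate is just an arbitrary test value) would
report loss and jitter directly:

  # on 10.1.0.1: start an iperf UDP server
  iperf -s -u

  # on the client: send 10 Mbit/s of UDP for 10 seconds,
  # with per-second loss/jitter reports
  iperf -c 10.1.0.1 -u -b 10M -t 10 -i 1

If UDP shows the same loss at low rates, that would point at the network path
itself rather than TCP behavior.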



On Fri, Aug 15, 2014 at 11:40 AM, Nick Burke <[email protected]> wrote:

>
> I upgraded from 4.0 to 4.3.0 some time ago. I didn't restart anything and
> it was all working great. However, I recently had to perform some
> maintenance that required restarting everything. Now I'm seeing packet loss
> on all virtual instances, even ones on the same host.
>
> sudo ping -c 500  -f 172.20.1.1
> PING 172.20.1.1 (172.20.1.1) 56(84) bytes of data.
> ........................................
> --- 172.20.1.1 ping statistics ---
> 500 packets transmitted, 460 received, 8% packet loss, time 864ms
> rtt min/avg/max/mdev = 0.069/0.218/1.290/0.139 ms, ipg/ewma 1.731/0.328 ms
>
> No interface errors are reported anywhere, and the host itself isn't under
> load at all. It doesn't matter whether the instance uses the e1000 or
> virtio driver. The only change I'm aware of is that I had to reboot all
> the physical servers.
>
>
> It could be related, but I was also hit by the
>
> https://issues.apache.org/jira/browse/CLOUDSTACK-6464
>
> bug. I did follow Marcus's suggestion:
>
>
> *"This is a shot in the dark, but there have been some issues around
> upgrades that involve the cloud.vlan table expected contents changing. New
> 4.3 installs using vlan isolation don't seem to reproduce the issue. I'll
> see if I can reproduce anything like this with basic and/or non-vlan
> isolated upgrades/installs. Can anyone experiencing an issue look at their
> database via something like "select * from cloud.vlan" and look at the
> vlan_id. If you see something like "untagged" instead of "vlan://untagged",
> please try changing it and see if that helps."*
>
> --
> Nick
>
>
>
>
>
> *'What is a human being, then?' 'A seed' 'A... seed?' 'An acorn that is
> unafraid to destroy itself in growing into a tree.' -David Zindell, A
> Requiem for Homo Sapiens*
>
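For anyone else hitting this, Marcus's check and the suggested fix could look
something like the following (a sketch based on his description, not a
verified fix; back up the database before changing anything):

  -- inspect vlan_id values in the cloud.vlan table
  SELECT id, vlan_id FROM cloud.vlan;

  -- if rows show "untagged" instead of "vlan://untagged", rewrite them
  UPDATE cloud.vlan SET vlan_id = 'vlan://untagged'
   WHERE vlan_id = 'untagged';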



-- 
Nick





