[Qemu-devel] Win 2000 driver for -vga std ?

2012-02-14 Thread Reeted

Hello, subject says it all

If I understand correctly, the driver for Windows 2000 with -vga std 
should be the Anapa VBE/VESA driver, VBEMP,

but I cannot find this executable anywhere:
http://navozhdeniye.narod.ru/vbemp.htm
every download link, the world over, is dead!

Has anybody preserved this very important driver?

Thank you
R.



Re: [Qemu-devel] Win 2000 driver for -vga std ?

2012-02-14 Thread Reeted

On 02/14/12 07:25, Michael Tokarev wrote:

On 14.02.2012 05:42, Reeted wrote:

Hello, subject says it all

If I understand correctly, the driver for Windows 2000 with -vga std 
should be the Anapa VBE/VESA driver, VBEMP,

but I cannot find this executable anywhere:
http://navozhdeniye.narod.ru/vbemp.htm
every download link, the world over, is dead!

Has anybody preserved this very important driver?


This "adapter" works in all versions of windows with a built-in
vesa driver just fine, no replacement is necessary or desired.

The only problem is that some versions of windows consider that
driver to be "problematic" somehow and mark the corresponding
device with a yellow exclamation sign.  Go ask M$ about this.


I don't think so...

It detects new hardware (I am virtualizing an existing machine) and asks 
me where to look for a driver. I point it at the Win2000 installation CD 
and at Windows Update, but it says it cannot find a driver for this 
video adapter. It then asks whether I want to disable the device or be 
prompted again for installation at the next boot.


And it keeps running at 16 colors (4-bit depth) at 800x600, with very 
poor performance when moving windows around.




[Qemu-devel] Bug(?): Can't unset vnc password (v1.0)

2012-03-30 Thread Reeted
In qemu-kvm 1.0 (from Ubuntu Precise) I don't seem to be able to UNSET 
the VNC password once it has been set via the qemu monitor.
I can set it to "", i.e. the empty password, but a connecting VNC client 
still asks for a password. It does log in if I enter the empty string, 
but I think it should not ask for a password at all.
Or is there any other way to disable password authentication, other than 
setting an empty password string?
Note: I am using set_password from the qemu monitor (actually libvirt's 
qemu-monitor-command).
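
For reference, this is roughly what I am doing (the domain name is an 
example):

# set a password through libvirt's passthrough to the HMP monitor
virsh qemu-monitor-command --hmp mydomain 'set_password vnc mysecret'

# attempt to unset it again -- the client is still prompted for a
# (blank) password afterwards
virsh qemu-monitor-command --hmp mydomain 'set_password vnc ""'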


Thank you




Re: [Qemu-devel] Bug(?): Can't unset vnc password (v1.0)

2012-03-31 Thread Reeted

On 03/30/12 16:00, Daniel P. Berrange wrote:

On Fri, Mar 30, 2012 at 03:57:15PM +0200, Reeted wrote:

In qemu-kvm 1.0 (from ubuntu Precise) I don't seem to be able to
UNSET the vnc password once it has been set via qemu monitor.
I can set it to "" which is empty password, but a VNC client
connecting still asks to enter a password.

QEMU must *not* change the VNC authentication method as a side effect of
setting a password.


Well, then there is the opposite bug:

If you start qemu with no VNC password set (the client connects without 
authentication) and then set a VNC password via the qemu monitor, the 
client now needs password authentication. Can you confirm this is not 
the intended behaviour?





[Qemu-devel] No more win2k virtio-net drivers in latest Fedora

2012-03-31 Thread Reeted

Hello all,
I have noticed that the virtio-net drivers for win2k exist in the 
3-year-old:


http://sourceforge.net/projects/kvm/files/kvm-driver-disc/20081229/NETKVM-20081229.iso/download

but not in the newest:

http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/virtio-win-0.1-22.iso

Would you please add such drivers back to the Fedora driver release,
or did you intentionally drop support for win2k?

Are the 2008 drivers reasonably bug-free (i.e. no undetected data 
corruption)?


Thank you
R.



[Qemu-devel] USB 2.0 printer passthrough very slow

2012-04-03 Thread Reeted

Hello all,
I have virtualized a Windows 2000 machine, using usb-passthrough as in:

http://fossies.org/unix/privat/qemu-1.0.1.tar.gz:a/qemu-1.0.1/docs/usb2.txt

in companion controller mode, i.e. specifying "bus=ehci.0" for both 1.1 
devices and 2.0 devices.
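
For clarity, a minimal sketch of what that setup boils down to (using 
the .cfg file shipped in the qemu docs/ directory; the hostbus/hostport 
values are examples):

# load the EHCI controller plus its three UHCI companions from the
# config file, then attach the host device to the shared bus
kvm -readconfig docs/ich9-ehci-uhci.cfg \
    -device usb-host,bus=ehci.0,hostbus=1,hostport=3 [...]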


I used "companion mode" because in my tests I had problems with two 
separate busses: attaching a 1.1 device to the emulated ehci bus was 
notifying errors of incompatible speed and would not work, while 
attaching it to the emulated 1.1 bus had another problem which I don't 
remember exactly... I think either it was not possible or qemu was 
crashing. But with companion mode it appeared to work.


In my early tests I did notice that EHCI emulation was rather slow: 
reading a USB flash key sequentially yielded 6.5 MB/s (from the host it 
was much faster, around 20 MB/s AFAIR), but I guessed that would be 
enough for a USB printer. I have not actually tested whether 
non-companion mode would be faster; maybe I should have tried that. If 
you think it would be faster, please tell me.


After virtualizing Windows 2000 the problem became very apparent in 
printing!
My Canon iP4700 now takes 3 to 4 minutes for a print job to execute and 
clear from the queue. That's almost unusable.


Note that the speed is normal once the actual printing starts: I can 
print 10 pages in a row (in a single job) at normal speed. But it is 
very slow in the phases of submitting a new print job to the queue, 
removing it from the queue, submitting a "go-poweroff" command to the 
queue, or changing anything about the printer state.


I am guessing that when the data stream is mostly one-way it performs 
reasonably, while handshake-like exchanges with many message/response 
round trips probably go very slowly.


Note that I have blacklisted the usblp module on the host, so nothing on 
the host is claiming the printer.
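
For the record, that amounts to something like this (the file name is my 
choice):

echo "blacklist usblp" > /etc/modprobe.d/blacklist-usblp.conf
rmmod usblp    # in case it was already loaded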


Dmesg since connecting the printer: (only 3 messages)
[ 6870.292017] usb 1-3: new high speed USB device number 3 using ehci_hcd
[ 6872.676028] usb 1-3: reset high speed USB device number 3 using ehci_hcd
[ 6873.144032] usb 1-3: reset high speed USB device number 3 using ehci_hcd

Qemu version is: Ubuntu Precise's qemu-kvm 1.0+noroms-0ubuntu6

info usbhost:
  Bus 1, Addr 3, Port 3, Speed 480 Mb/s
    Class 00: USB device 04a9:10d2, iP4700 series
  Auto filters:
    Bus 1, Addr *, Port 5, ID *:*
    Bus 1, Addr *, Port 4, ID *:*
    Bus 1, Addr *, Port 3, ID *:*
    Bus 4, Addr *, Port 1, ID *:*
    Bus 3, Addr *, Port 2, ID *:*
    Bus 3, Addr *, Port 1, ID *:*

info usb:
  Device 0.2, Port 1, Speed 12 Mb/s, Product QEMU USB Tablet
  Device 1.0, Port 1, Speed 1.5 Mb/s, Product USB Host Device
  Device 1.0, Port 2, Speed 1.5 Mb/s, Product USB Host Device
  Device 1.1, Port 3, Speed 480 Mb/s, Product iP4700 series
  Device 1.0, Port 4, Speed 1.5 Mb/s, Product USB Host Device
  Device 1.0, Port 5, Speed 1.5 Mb/s, Product USB Host Device
  Device 1.2, Port 6, Speed 12 Mb/s, Product QEMU USB Hub
  Device 1.0, Port 6.1, Speed 1.5 Mb/s, Product USB Host Device

Note I am also using usb tablet... I might remove that if you say that 
an ehci-only setup is likely to go faster.


Libvirt usb part (the <qemu:commandline> section; the USB <hostdev> 
passthrough entries are not reproduced here):

<qemu:commandline>
  <qemu:arg value='-device'/>
  <qemu:arg value='ich9-usb-ehci1,id=ehci,addr=1d.7,multifunction=on'/>
  <qemu:arg value='-device'/>
  <qemu:arg value='ich9-usb-uhci1,id=uhci-1,addr=1d.0,multifunction=on,masterbus=ehci.0,firstport=0'/>
  <qemu:arg value='-device'/>
  <qemu:arg value='ich9-usb-uhci2,id=uhci-2,addr=1d.1,multifunction=on,masterbus=ehci.0,firstport=2'/>
  <qemu:arg value='-device'/>
  <qemu:arg value='ich9-usb-uhci3,id=uhci-3,addr=1d.2,multifunction=on,masterbus=ehci.0,firstport=4'/>
</qemu:commandline>

generated command line:
/usr/bin/kvm -S -M pc-1.0 -enable-kvm -m 2048 -smp 
2,sockets=2,cores=1,threads=1 -name windows_cl3_v3.1 -uuid 
0779e165-b11a-6d1c-fa92-f60ec5bdd9a7 -nodefconfig -nodefaults -chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/windows_cl3_v3.1.monitor,server,nowait 
-mon chardev=charmonitor,id=monitor,mode=control -rtc 
base=2012-04-02T20:37:15 -no-shutdown -boot order=c,menu=on -drive 
file=/dev/mapper/vg1-lv_cl3_v3.1,if=none,id=drive-ide0-0-0,format=raw,cache=writethrough,aio=native 
-device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 
-drive 
file=/home/virtual_machines/ISO/ubuntu-11.10-desktop-amd64.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=writethrough,aio=native 
-device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 
-netdev tap,fd=18,id=hostnet0 -device 
virtio-net-pci,event_idx=off,netdev=hostnet0,id=net0,mac=52:54:00:12:45:88,bus=pci.0,addr=0x3 
-chardev pty,id=charserial0 -device 
isa-serial,chardev=charserial0,id=serial0 -usb -device 
usb-tablet,id=input0 -vnc 127.0.0.1:0 -vga cirrus -device 
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 -device 
ich9-usb-ehci1,id=ehci,addr=1d.7,multifunction=on -device 
ich9-usb-uhci1,id=uhci-1,addr=1d.0,multifunction=on,masterbus=ehci.0,firstport=0 
-device 
ich9-usb-uhci2,id=uhci-2,addr=1d.1,multifunction=on,masterbus=ehci.0,firstport=2 
-device 
ich9-usb-uhci3,id=uhci-3,addr=1d.2,multifunction=on,masterbus=ehci.0,firstport=4 
-device usb-host,bus=ehc

Re: [Qemu-devel] USB 2.0 printer passthrough very slow

2012-04-03 Thread Reeted

On 04/03/12 15:33, Erik Rull wrote:

Hi,

please try to use the .cfg file from the docs/ directory, 


The ich9-ehci-uhci.cfg? Yes, it's the same thing I am using.
If you look at the <qemu:commandline> section, I have replicated that 
file exactly.
I had access-rights problems using the .cfg file, which is why I 
replicated it on the command line, but it's really the same thing.


using this file, the USB printer speed is really good on my systems. 
Well, it is still emulation so you'll never get native speed, but it is 
far better than the USB 1.x emulation we had in the 0.1x versions.
The best way to test the approximate transfer speed: plug in a fast USB 
key, copy some larger chunks of data and measure the time, on the same 
port where you attach the printer.


I had tested that on a Linux guest and it was 6.5 MB/s (dd streaming of 
the device! Not a file copy) as I said, but you are right, it's better 
if I try it on the real Windows guest. I won't have a chance to test it 
until the weekend though.
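
For reference, the earlier test was along these lines (the device node 
is an example):

# drop the guest page cache, then read the raw device sequentially
sync; echo 3 > /proc/sys/vm/drop_caches
dd if=/dev/sdb of=/dev/null bs=1M count=512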




Printing e.g. the Windows XP test page doesn't take much more time than 
on a native system.


And if you send a second job, does it follow the first one right away, 
or does it take minutes in between?


What printer?

What qemu version?

Any chance you could post here your commandline? (from ps aux )

What is the guest's setting in device management: is it ACPI Uniprocessor
http://www.neowin.net/forum/uploads/post-9158-1134631266.png
or ACPI Multiprocessor or Standard PC or something else?
Mine is ACPI Multiprocessor with 2 vCPUs. The other settings were almost 
twice as slow at booting the system.


(Sorry for the many questions :-) )

Thanks for your help
R.




Re: [Qemu-devel] USB 2.0 printer passthrough very slow

2012-04-03 Thread Reeted

On 04/03/12 14:26, Reeted wrote:


My Canon iP4700 now takes 3 to 4 minutes for a print job to execute 
and clear from the queue. That's almost unusable.




I see new patches in the qemu master branch (i.e. newer than 1.0.1) 
related to pipelining of EHCI commands, by Gerd Hoffmann.


Is that going to increase speed significantly, like at least 2x?

Thank you



Re: [Qemu-devel] virtio-blk performance regression and qemu-kvm

2012-03-06 Thread Reeted

On 03/06/12 13:59, Stefan Hajnoczi wrote:

On Mon, Mar 5, 2012 at 4:44 PM, Martin Mailand  wrote:

Am 05.03.2012 17:35, schrieb Stefan Hajnoczi:


1. Test on i7 Laptop with Cpu governor "ondemand".

  v0.14.1
  bw=63492KB/s iops=15873
  bw=63221KB/s iops=15805

  v1.0
  bw=36696KB/s iops=9173
  bw=37404KB/s iops=9350

  master
  bw=36396KB/s iops=9099
  bw=34182KB/s iops=8545

  Change the Cpu governor to "performance"
  master
  bw=81756KB/s iops=20393
  bw=81453KB/s iops=20257

Interesting finding.  Did you show the 0.14.1 results with
"performance" governor?



Hi Stefan,
all results are with "ondemand" except the one where I changed it to
"performance"

Do you want a v0.14.1 test with the governor on "performance"?

Yes, the reason why that would be interesting is because it allows us
to put the performance gain with master+"performance" into
perspective.  We could see how much of a change we get.



Me too, I would be interested in seeing 0.14.1 tested with the 
performance governor, so it can be compared to master with the 
performance governor, to make sure that this is not a regression.


BTW, I'll take the opportunity to say that 15.8 or 20.3 k IOPS are very 
low figures compared to what I'd instinctively expect from a 
paravirtualized block driver.
There are now PCIe SSD cards that do 240 k IOPS (e.g. the "OCZ RevoDrive 
3 x2 max iops"), which is 12-15 times higher, for something that has to 
go through a real driver and a real PCI-Express bus, and can't use 
zero-copy techniques.
The IOPS we can give to a VM is currently less than half that of a 
single SATA SSD drive (60 k IOPS or so, these days).
That's why I consider this topic of virtio-blk performance very 
important. I hope there can be improvements in this area...


Thanks for your time
R.



Re: [Qemu-devel] virtio-blk performance regression and qemu-kvm

2012-03-07 Thread Reeted

On 03/07/12 09:04, Stefan Hajnoczi wrote:

On Tue, Mar 6, 2012 at 10:07 PM, Reeted  wrote:

On 03/06/12 13:59, Stefan Hajnoczi wrote:

BTW, I'll take the opportunity to say that 15.8 or 20.3 k IOPS are very low
figures compared to what I'd instinctively expect from a paravirtualized
block driver.
There are now PCIe SSD cards that do 240 k IOPS (e.g. the "OCZ RevoDrive 3 x2
max iops"), which is 12-15 times higher, for something that has to go through
a real driver and a real PCI-Express bus, and can't use zero-copy
techniques.
The IOPS we can give to a VM is currently less than half that of a single
SATA SSD drive (60 k IOPS or so, these days).
That's why I consider this topic of virtio-blk performance very important.
I hope there can be improvements in this area...

It depends on the benchmark configuration.  virtio-blk is capable of
doing 100,000s of iops; I've seen results.  My guess is that you can
do >100,000 read iops with virtio-blk on a good machine and stock
qemu-kvm.


It must be very difficult to configure, then.
I also did benchmarks in the past, and I can confirm Martin's and 
Dongsu's findings of about 15 k IOPS with:
qemu-kvm 0.14.1, an Intel Westmere CPU, virtio-blk (kernel 2.6.38 on the 
guest, 3.0 on the host), fio, 4k random *reads* from the host page cache 
(the backing LVM device was fully in cache on the host), the writeback 
cache setting, guest cache dropped prior to the benchmark (and 
insufficient guest memory to cache a significant portion of the device).
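
For reference, the fio job was along these lines (the device path is an 
example and the exact flags are my reconstruction):

# in the guest: drop the cache, then do 4k random reads against the
# virtio disk with async I/O and some queue depth
sync; echo 3 > /proc/sys/vm/drop_caches
fio --name=randread --filename=/dev/vdb --rw=randread --bs=4k \
    --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 --time_based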
If you can teach us how to reach 100 k IOPS, I think everyone would be 
grateful :-)




[Qemu-devel] Windows boot is waiting for keypress

2012-03-11 Thread Reeted

Hello,
I am virtualizing a Windows 2000 machine (a bit-by-bit copy of a 
physical machine).


It apparently works fine except for one strange thing: Windows 2000 
stops at the black screen (first step of boot) where it asks whether I 
want to load Windows 2000 or the previous operating system.


When the machine was physical there was a short timeout on that screen, 
after which it would proceed with the default choice of Windows 2000, 
but in qemu it waits forever, as if I had pressed a key to interrupt the 
timeout.


Note that there is another bootloader *before* that one (it chainloads 
the Windows bootloader), and that one also has a timeout which can be 
interrupted with a keypress, but it does not show the problem, i.e. it 
goes ahead after its timeout without my needing to press a key.


That's a problem because I cannot really start the Windows VM from Linux 
scripting, as it stops at boot. Sending a keypress to a VM 
programmatically is not so easy, methinks... or do you know how to do that?
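
The closest thing I can think of is injecting a key from the host, along 
these lines (untested; the domain name is an example):

# through libvirt
virsh send-key win2000 KEY_ENTER

# or through the HMP monitor
virsh qemu-monitor-command --hmp win2000 'sendkey ret'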


This is with qemu-kvm 1.0

Thanks for any idea



[Qemu-devel] Virtual serial logging server?

2011-11-06 Thread Reeted

Dear all,
please excuse the almost-OT question,

I see various possibilities in qemu-kvm and libvirt for sending virtual 
serial port data to files, sockets, pipes, etc. on the host.

In particular, the TCP socket seems interesting.

Can you suggest a server application that receives all such TCP 
connections and logs the serial data of many virtual machines at once?


In particular I would be interested in something with quotas, i.e. 
something that deletes old lines from the logs of a given VM when the 
filesystem space occupied by that VM's serial logs exceeds a certain 
amount, so that the log space of the other VMs is not starved when one 
of them loops.
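
To make the setup concrete, this is roughly what I have in mind (ports, 
paths and sizes are examples; logrotate only gives crude size-based 
rotation, not a real quota):

# guest side: point the virtual serial port at a TCP listener on the host
kvm [...] -serial tcp:127.0.0.1:4555

# host side: one listener per VM, appending to a per-VM log file
socat -u TCP-LISTEN:4555,bind=127.0.0.1,reuseaddr,fork \
      OPEN:/var/log/vmserial/vm1.log,creat,append

# something like this in /etc/logrotate.d/vmserial to cap the size:
#   /var/log/vmserial/*.log {
#       size 10M
#       rotate 5
#       copytruncate
#   }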


Thank you
R.



[Qemu-devel] Qemu/KVM guest boots 2x slower with vhost_net

2011-10-04 Thread Reeted

Hello all,
for people in qemu-devel list, you might want to have a look at the 
previous thread about this topic, at

http://www.spinics.net/lists/kvm/msg61537.html
but I will try to recap here.

I found that virtual machines on my host booted 2x slower (on average 
2x slower, but some parts are probably at least 3x slower) under libvirt 
compared to a manual qemu-kvm launch. With Daniel's help I narrowed it 
down to the presence of vhost_net (active by default when launched by 
libvirt), i.e. with vhost_net the boot process is *UNIFORMLY* 2x slower.
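
For anyone wanting to reproduce this, toggling vhost_net boils down to 
something like the following (tap setup details omitted):

# manual launch, vhost off:
kvm [...] -netdev tap,id=net0,vhost=off -device virtio-net-pci,netdev=net0

# same launch with vhost on (what libvirt does by default when the
# vhost_net module is loaded):
kvm [...] -netdev tap,id=net0,vhost=on -device virtio-net-pci,netdev=net0

# in libvirt, vhost can be forced off per interface by putting
#   <driver name='qemu'/>
# inside the <interface> element.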


The problem is still reproducible on my systems, but these are going to 
production soon and I am quite busy; I might not have many more days 
left for testing. It might be just next Saturday and Sunday, so if you 
can post some of your suggestions here by Saturday, that would be most 
appreciated.



I have now performed some benchmarks which I hadn't performed for the 
old thread:


- openssl speed -multi 2 rsa (CPU benchmark): shows no performance 
  difference with or without vhost_net

- disk benchmarks: show no performance difference with or without 
  vhost_net. The disk benchmarks were (both with cache=none and 
  cache=writeback; invocations sketched below):
  - dd streaming read
  - dd streaming write
  - fio 4k random read, in all of these cases: cache=none; 
    cache=writeback with the host cache dropped before the test; 
    cache=writeback with all fio data in the host cache (measures 
    context switching)
  - fio 4k random write
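
The invocations were along these lines (device path and sizes are 
examples):

# streaming read and write against the raw virtio device
# (the write of course destroys the device contents)
dd if=/dev/vdb of=/dev/null bs=1M count=1024
dd if=/dev/zero of=/dev/vdb bs=1M count=1024 oflag=direct

# dropping the host cache before the relevant fio cases
sync; echo 3 > /proc/sys/vm/drop_caches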

So I couldn't reproduce the problem with any benchmark that came to my mind.

But in the boot process this is very visible.
I'll continue the description below; before that, here are the system 
specifications:
---
Host is kernel 3.0.3 with qemu-kvm 0.14.1, both vanilla and compiled 
by me.
Libvirt is the version in Ubuntu 11.04 Natty, which is 0.8.8-1ubuntu6.5; 
I didn't recompile this one.


VM disks are LVs of LVM on an MD RAID array.
The problem shows identically with both cache=none and cache=writeback. 
AIO is native.


Physical CPUs are dual Westmere 6-core (12 cores total, plus 
hyperthreading), with 2 vCPUs per VM.
All VMs are idle or off except the VM being tested.

Guests are:
- multiple Ubuntu 11.04 Natty 64-bit with their 2.6.38-8-virtual kernel: 
very minimal Ubuntu installs done with debootstrap (not from the Ubuntu 
installer)
- one Fedora Core 6 32-bit with a 32-bit 2.6.38-8-virtual kernel + 
initrd, both taken from Ubuntu Natty 32-bit (so I could have virtio). 
Standard install (except the kernel, which was replaced afterwards).

Static IP addresses in all guests.
---

All types of guests show this problem, but it is most visible in the FC6 
guest because its boot process is MUCH longer than in the 
debootstrap-installed Ubuntus.


Please note that most of the boot process, at least from a certain point 
onwards, appears to the eye uniformly 2x or 3x slower under vhost_net. 
By "boot process" I mean, roughly (copied by hand from some screenshots):



Loading default keymap
Setting hostname
Setting up LVM - no volume groups found
checking filesystems... clean ...
remounting root filesystem in read-write mode
mounting local filesystems
enabling local filesystems quotas
enabling /etc/fstab swaps
INIT entering runlevel 3
entering non-interactive startup
Starting sysstat: calling the system activity data collector (sadc)
Starting background readahead

** starting from here, everything, or almost everything, is 
much slower


Checking for hardware changes
Bringing up loopback interface
Bringing up interface eth0
starting system logger
starting kernel logger
starting irqbalance
starting portmap
starting nfs statd
starting rpc idmapd
starting system message bus
mounting other filesystems
starting PC/SC smart card daemon (pcscd)
starting hidd ... can't open HIDP control socket: address family not 
supported by protocol (this is an error due to backporting a newer 
ubuntu kernel to FC6)

starting autofs: loading autofs4
starting automount
starting acpi daemon
starting hpiod
starting hpssd
starting cups
starting sshd
starting ntpd
starting sendmail
starting sm-client
starting console mouse services
starting crond
starting xfs
starting anacron
starting atd
starting yum-updatesd
starting Avahi daemon
starting HAL daemon


From the point I marked onwards, most of these are services, i.e. 
daemons listening on sockets, so I had thought that maybe binding to a 
socket could be slower under vhost_net, but putting nc in listening mode 
with "nc -l 15000" is instantaneous, so I am not sure.


The shutdown of FC6, with basically the same services as above being 
torn down, is *also* much slower with vhost_net.


Thanks for any suggestions
R.



Re: [Qemu-devel] Qemu/KVM guest boots 2x slower with vhost_net

2011-10-09 Thread Reeted

On 10/05/11 01:12, Reeted wrote:

I found that virtual machines in my host booted 2x slower ... to the 
vhost_net presence

...


Just a small update,

Firstly: I cannot reproduce any slowness after boot by doing:

# time /etc/init.d/chrony restart
Restarting time daemon: Starting /usr/sbin/chronyd...
chronyd is running and online.
real    0m3.022s
user    0m0.000s
sys     0m0.000s

since this is a network service I expected it to show the problem, but 
it doesn't. It takes exactly the same time with and without vhost_net.



Secondly, vhost_net appears to work correctly, because I have performed 
an NPtcp performance test between two guests on the same host, and these 
are the results:


vhost_net deactivated for both guests:

NPtcp -h 192.168.7.81
Send and receive buffers are 16384 and 87380 bytes
(A bug in Linux doubles the requested buffer sizes)
Now starting the main loop
  0:    1 bytes    917 times -->   0.08 Mbps in   92.07 usec
  1:    2 bytes   1086 times -->   0.18 Mbps in   86.04 usec
  2:    3 bytes   1162 times -->   0.27 Mbps in   85.08 usec
  3:    4 bytes    783 times -->   0.36 Mbps in   85.34 usec
  4:    6 bytes    878 times -->   0.54 Mbps in   85.42 usec
  5:    8 bytes    585 times -->   0.72 Mbps in   85.31 usec
  6:   12 bytes    732 times -->   1.07 Mbps in   85.52 usec
  7:   13 bytes    487 times -->   1.16 Mbps in   85.52 usec
  8:   16 bytes    539 times -->   1.43 Mbps in   85.26 usec
  9:   19 bytes    659 times -->   1.70 Mbps in   85.43 usec
 10:   21 bytes    739 times -->   1.77 Mbps in   90.71 usec
 11:   24 bytes    734 times -->   2.13 Mbps in   86.13 usec
 12:   27 bytes    822 times -->   2.22 Mbps in   92.80 usec
 13:   29 bytes    478 times -->   2.35 Mbps in   94.02 usec
 14:   32 bytes    513 times -->   2.60 Mbps in   93.75 usec
 15:   35 bytes    566 times -->   3.15 Mbps in   84.77 usec
 16:   45 bytes    674 times -->   4.01 Mbps in   85.56 usec
 17:   48 bytes    779 times -->   4.32 Mbps in   84.70 usec
 18:   51 bytes    811 times -->   4.61 Mbps in   84.32 usec
 19:   61 bytes    465 times -->   5.08 Mbps in   91.57 usec
 20:   64 bytes    537 times -->   5.22 Mbps in   93.46 usec
 21:   67 bytes    551 times -->   5.73 Mbps in   89.20 usec
 22:   93 bytes    602 times -->   8.28 Mbps in   85.73 usec
 23:   96 bytes    777 times -->   8.45 Mbps in   86.70 usec
 24:   99 bytes    780 times -->   8.71 Mbps in   86.72 usec
 25:  125 bytes    419 times -->  11.06 Mbps in   86.25 usec
 26:  128 bytes    575 times -->  11.38 Mbps in   85.80 usec
 27:  131 bytes    591 times -->  11.60 Mbps in   86.17 usec
 28:  189 bytes    602 times -->  16.55 Mbps in   87.14 usec
 29:  192 bytes    765 times -->  16.80 Mbps in   87.19 usec
 30:  195 bytes    770 times -->  17.11 Mbps in   86.94 usec
 31:  253 bytes    401 times -->  22.04 Mbps in   87.59 usec
 32:  256 bytes    568 times -->  22.64 Mbps in   86.25 usec
 33:  259 bytes    584 times -->  22.68 Mbps in   87.12 usec
 34:  381 bytes    585 times -->  33.19 Mbps in   87.58 usec
 35:  384 bytes    761 times -->  33.54 Mbps in   87.36 usec
 36:  387 bytes    766 times -->  33.91 Mbps in   87.08 usec
 37:  509 bytes    391 times -->  44.23 Mbps in   87.80 usec
 38:  512 bytes    568 times -->  44.70 Mbps in   87.39 usec
 39:  515 bytes    574 times -->  45.21 Mbps in   86.90 usec
 40:  765 bytes    580 times -->  66.05 Mbps in   88.36 usec
 41:  768 bytes    754 times -->  66.73 Mbps in   87.81 usec
 42:  771 bytes    760 times -->  67.02 Mbps in   87.77 usec
 43: 1021 bytes    384 times -->  88.04 Mbps in   88.48 usec
 44: 1024 bytes    564 times -->  88.30 Mbps in   88.48 usec
 45: 1027 bytes    566 times -->  88.63 Mbps in   88.40 usec
 46: 1533 bytes    568 times -->  71.75 Mbps in  163.00 usec
 47: 1536 bytes    408 times -->  72.11 Mbps in  162.51 usec
 48: 1539 bytes    410 times -->  71.71 Mbps in  163.75 usec
 49: 2045 bytes    204 times -->  95.40 Mbps in  163.55 usec
 50: 2048 bytes    305 times -->  95.26 Mbps in  164.02 usec
 51: 2051 bytes    305 times -->  95.33 Mbps in  164.14 usec
 52: 3069 bytes    305 times --> 141.16 Mbps in  165.87 usec
 53: 3072 bytes    401 times --> 142.19 Mbps in  164.83