Hi Florin,
From my logs it seems that TSO is not on even when using the native driver;
logs attached below. I'm going to do a deeper dive into the various networking
layers involved in this setup and will post any interesting findings back on
this thread.
Thank you for all the help so far!
R
Hi Dom,
From the logs it looks like TSO is not on. I wonder if the vhost nic actually
honors the "tso on" flag. Have you also tried with the native vhost driver,
instead of the dpdk one? I've never tried it with tcp, so I don't know if it
properly advertises the fact that it supports TSO.
Hi,
I rebuilt VPP on master and updated startup.conf to enable tso as follows:
dpdk {
dev 0000:00:03.0 {
num-rx-desc 2048
num-tx-desc 2048
tso on
}
uio-driver vfio-pci
enable-tcp-udp-checksum
}
I'm not sure whether it is working or not; there is nothing in show session
verbose 2 to indicate whether …
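As an aside on why the "tso on" flag is worth chasing: a back-of-the-envelope sketch of the per-packet work TSO removes. The 64 KB burst size and 1460-byte MSS below are generic assumptions, not values taken from this setup.

```python
import math

MSS = 1460          # typical TCP MSS on a 1500-byte MTU path (assumed)
TSO_BURST = 65536   # largest super-segment handed off with TSO (assumed)

# Without TSO the stack segments in software: one pass through the
# driver/virtio queue per MSS-sized packet.
segments_without_tso = math.ceil(TSO_BURST / MSS)

# With TSO the NIC (or vhost backend) does the split, so the per-packet
# cost is paid once per 64 KB burst instead of ~45 times.
print(segments_without_tso)  # -> 45
```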
Hi Dom,
> On Dec 12, 2019, at 12:29 PM, dch...@akouto.com wrote:
>
> Hi Florin,
>
> The saga continues, a little progress and more questions. In order to reduce
> the variables, I am now only using VPP on one of the VMs: iperf3 server is
> running on a VM with native Linux networking, and iperf3+VCL client running
> on the second VM.
Hi Florin,
The saga continues, a little progress and more questions. In order to reduce
the variables, I am now only using VPP on one of the VMs: iperf3 server is
running on a VM with native Linux networking, and iperf3+VCL client running on
the second VM.
I've pasted the output from a few commands …
Hi Dom,
Great to see progress! More inline.
> On Dec 6, 2019, at 10:21 AM, dch...@akouto.com wrote:
>
> Hi Florin,
>
> Some progress, at least with the built-in echo app, thank you for all the
> suggestions so far! By adjusting the fifo-size and testing in half-duplex I
> was able to get close to 5 Gbps between the two openstack instances using
> the built-in test echo app.
Hi Florin,
Some progress, at least with the built-in echo app, thank you for all the
suggestions so far! By adjusting the fifo-size and testing in half-duplex I was
able to get close to 5 Gbps between the two openstack instances using the
built-in test echo app:
vpp# test echo clients gbytes 1
Hi Dom,
I would actually recommend testing with iperf because it should not be slower
than the builtin echo server/client apps. Remember to add fifo-size to your
echo apps cli commands (something like fifo-size 4096 for 4MB) to increase the
fifo sizes.
Also note that you're trying full-duplex …
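Florin's fifo-size note above ("fifo-size 4096 for 4MB") implies the echo apps' argument is in kB. A quick sketch of that conversion, plus the bandwidth-delay product the fifo ultimately has to cover; the 10 Gbps / 1 ms figures are illustrative assumptions, not measurements from this thread.

```python
fifo_size_arg = 4096               # value passed to the echo app CLI
fifo_bytes = fifo_size_arg * 1024  # argument is in kB -> 4 MiB

def bdp_bytes(gbps: float, rtt_ms: float) -> float:
    """Bandwidth-delay product: bytes in flight needed to keep the pipe full."""
    return gbps * 1e9 / 8 * (rtt_ms / 1e3)

print(fifo_bytes)               # 4194304 (4 MiB)
print(int(bdp_bytes(10, 1.0)))  # 1250000 -> a 4 MiB fifo has headroom here
```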
Hi Dom,
I suspect your client/server are really bursty in sending/receiving and your
fifos are relatively small. So probably the delay in issuing the cli in the two
vms is enough for the receiver to drain its rx fifo. Also, whenever the rx fifo
on the receiver fills, the sender will most probably …
Hi Florin,
Those are tcp echo results. Note that the "show session verbose 2" command was
issued while there was still traffic being sent. Interesting that on the client
(sender) side the tx fifo is full (cursize 65534 nitems 65534) and on the
server (receiver) side the rx fifo is empty (cursize …
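One way to read the full 64 kB tx fifo above: since the fifo also bounds the advertised TCP window, throughput is capped at roughly wnd / RTT. A sketch using the nitems value from the session dump and an assumed 1 ms RTT (not measured in this thread):

```python
nitems = 65534   # tx fifo size in bytes, from "show session verbose 2" above
rtt_s = 0.001    # assumed 1 ms round-trip time between the two VMs

# Window-limited throughput: at most one window of data per round trip.
max_mbps = nitems * 8 / rtt_s / 1e6
print(round(max_mbps))  # -> 524, i.e. ~0.5 Gbps regardless of link speed
```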
Hi Dom,
[traveling so a quick reply]
For some reason, your rx/tx fifos (see nitems), and implicitly the snd and rcv
wnd, are 64kB in your logs lower. Is this the tcp echo or iperf result?
Regards,
Florin
> On Dec 4, 2019, at 7:29 AM, dch...@akouto.com wrote:
>
> Hi,
>
> Thank you Florin and Jerome for your time, very much appreciated.
It turns out I was using DPDK virtio, with help from Moshin I changed the
configuration and tried to repeat the tests using VPP native virtio, results
are similar but there are some interesting new observations, sharing them here
in case they are useful to others or trigger any ideas.
After configuring …
Are you using VPP native virtio or DPDK virtio ?
Jerome
From: on behalf of "dch...@akouto.com"
Date: Wednesday, December 4, 2019 at 16:29
To: "vpp-dev@lists.fd.io"
Subject: Re: [vpp-dev] VPP / tcp_echo performance
Hi,
Thank you Florin and Jerome for your time, very much appreciated.
Hi,
Thank you Florin and Jerome for your time, very much appreciated.
* For VCL configuration, FIFO sizes are 16 MB
* "show session verbose 2" does not indicate any retransmissions. Here are the
numbers during a test run where approx. 9 GB were transferred (the difference
in values between client …
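For anyone reproducing this, 16 MB VCL fifos are normally set in vcl.conf along these lines (a sketch; the parameter names should be checked against your VPP version's VCL options):

```
vcl {
  rx-fifo-size 16777216
  tx-fifo-size 16777216
}
```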
Hi Dom,
I’ve never tried to run the stack in a VM, so not sure about the expected
performance, but here are a couple of comments:
- What fifo sizes are you using? Are they at least 4MB (see [1] for VCL
configuration).
- I don’t think you need to configure more than 16k buffers/numa.
Additionally …
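To put the "16k buffers/numa" suggestion above in memory terms: a rough sketch, assuming VPP's usual 2048-byte default buffer data size (the default is an assumption, not stated in this thread).

```python
buffers_per_numa = 16384  # suggested ceiling from the message above
buffer_data_size = 2048   # bytes per buffer; assumed VPP default

total_mib = buffers_per_numa * buffer_data_size / (1024 * 1024)
print(total_mib)  # -> 32.0 MiB of buffer data memory per NUMA node
```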
Hi all,
I've been running some performance tests and not quite getting the results I
was hoping for, and have a couple of related questions I was hoping someone
could provide some tips with. For context, here's a summary of the results of
TCP tests I've run on two VMs (CentOS 7 OpenStack instances) …