Hey guys,
In bond_mode_8023ad_activate_slave(), could we first try to add the bond
and LACP multicast MACs to the slave, and fall back to promiscuous mode
only if the add fails?
In other words:
if (rte_eth_dev_mac_addr_add(slave_id, bond_mac) != 0
    || rte_eth_dev_mac_addr_add(slave_id, lacp_multicast_mac) != 0)
        rte_eth_promiscuous_enable(slave_id);
frames.
Signed-off-by: Andriy Berestovskyy
---
lib/librte_ether/rte_ethdev.c | 20 +---
lib/librte_ether/rte_ethdev.h | 6 +-
2 files changed, 10 insertions(+), 16 deletions(-)
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index eb0a94a..f560051
frames.
Signed-off-by: Andriy Berestovskyy
---
Notes:
v2 changes:
- reword the commit title according to the check-git-log.sh
lib/librte_ether/rte_ethdev.c | 20 +---
lib/librte_ether/rte_ethdev.h | 6 +-
2 files changed, 10 insertions(+), 16 deletions(-)
diff --git
Hey Qiming,
On 27.03.2017 08:15, Yang, Qiming wrote:
> I don't think this is a bug. Returning an error when an invalid
> max_rx_pkt_len is configured is suitable for this generic API.
It is not a bug, it is an inconsistency. At the moment we can set
max_rx_pkt_len for normal frames, and if it is out of range
Some platforms do not have core/socket info in /proc/cpuinfo.
Signed-off-by: Andriy Berestovskyy
---
usertools/cpu_layout.py | 53 +
1 file changed, 23 insertions(+), 30 deletions(-)
diff --git a/usertools/cpu_layout.py b/usertools/cpu_layout.py
Some PMDs (mostly VFs) do not provide link up/down functionality.
Signed-off-by: Andriy Berestovskyy
---
examples/ip_pipeline/init.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/examples/ip_pipeline/init.c b/examples/ip_pipeline/init.c
index 1dc2a04..be148fc 100644
At the moment the ip_pipeline example uses a hardcoded limit of 32 CPUs
during initialization, which leads to an error on systems with more than 32 CPUs.
Signed-off-by: Andriy Berestovskyy
---
examples/ip_pipeline/init.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/examples/ip_pipeline/init.c b/examples/ip_pipeline/init.c
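A minimal sketch of the direction of the fix, assuming the hardcoded 32
is replaced with the EAL's own limit (illustrative, not the actual diff):

    #include <rte_lcore.h>

    static void
    init_lcores_sketch(void)
    {
            uint32_t lcore;

            /* iterate up to RTE_MAX_LCORE instead of a hardcoded 32 */
            for (lcore = 0; lcore < RTE_MAX_LCORE; lcore++) {
                    if (!rte_lcore_is_enabled(lcore))
                            continue;
                    /* ... per-lcore initialization ... */
            }
    }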
The code should return the actual number of packets read.
Fixes: 5a99f208 ("port: support file descriptor")
Signed-off-by: Andriy Berestovskyy
---
lib/librte_port/rte_port_fd.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/lib/librte_port/rte_port_fd.c b/lib/librte_port/rte_port_fd.c
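The gist, as a hedged sketch (names and the read-based RX loop are
illustrative, not the actual diff): the loop must return how many packets
it actually read, not the requested burst size.

    #include <unistd.h>
    #include <rte_mbuf.h>

    static int
    fd_port_rx_sketch(int fd, struct rte_mbuf **pkts, uint32_t n_pkts,
                      struct rte_mempool *mp, uint32_t max_frame)
    {
            uint32_t i;

            for (i = 0; i < n_pkts; i++) {
                    struct rte_mbuf *m = rte_pktmbuf_alloc(mp);
                    ssize_t n;

                    if (m == NULL)
                            break;
                    n = read(fd, rte_pktmbuf_mtod(m, void *), max_frame);
                    if (n <= 0) {
                            rte_pktmbuf_free(m);
                            break;
                    }
                    m->data_len = (uint16_t)n;
                    m->pkt_len = n;
                    pkts[i] = m;
            }
            return i; /* the actual number of packets read, not n_pkts */
    }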
Makes code a bit cleaner and type-aware.
Signed-off-by: Andriy Berestovskyy
---
lib/librte_port/rte_port_fd.c | 7 +--
lib/librte_port/rte_port_source_sink.c | 7 +--
2 files changed, 2 insertions(+), 12 deletions(-)
diff --git a/lib/librte_port/rte_port_fd.c b/lib/librte_port/rte_port_fd.c
Signed-off-by: Andriy Berestovskyy
---
lib/librte_port/rte_port_ethdev.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/lib/librte_port/rte_port_ethdev.c b/lib/librte_port/rte_port_ethdev.c
index 5aaa8f7..6862849 100644
--- a/lib/librte_port/rte_port_ethdev.c
+++ b/lib/librte_port/rte_port_ethdev.c
Some applications and DPDK examples expect link up/down
functionality to be provided.
Signed-off-by: Andriy Berestovskyy
---
drivers/net/thunderx/nicvf_ethdev.c | 14 ++
1 file changed, 14 insertions(+)
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
Some DPDK applications/examples check link status on their
start. NICVF does not wait for the link, so those apps fail.
Wait up to 9 seconds for the link as other PMDs do in order
to fix those apps/examples.
Signed-off-by: Andriy Berestovskyy
---
drivers/net/thunderx/nicvf_ethdev.c | 21
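The usual wait loop, sketched on the assumption it mirrors what other
PMDs do (9 seconds as 90 polls of 100 ms; names are illustrative):

    #include <rte_ethdev.h>
    #include <rte_cycles.h>

    #define LINK_TIMEOUT_MS 9000
    #define LINK_POLL_MS     100

    static void
    wait_for_link_sketch(uint8_t port_id)
    {
            struct rte_eth_link link;
            int waited;

            for (waited = 0; waited < LINK_TIMEOUT_MS; waited += LINK_POLL_MS) {
                    rte_eth_link_get_nowait(port_id, &link);
                    if (link.link_status)
                            return;
                    rte_delay_ms(LINK_POLL_MS);
            }
    }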
Signed-off-by: Andriy Berestovskyy
---
lib/librte_mempool/rte_mempool.h | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 991feaa..898f443 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
> {IPv4(192, 168, 1, 1), 32, 2}
>
> };
>
> send the flow with dst IP:
>
> 192.168.1.2
>
> It should check the second layer table. But the performance is still 10G.
> Is anything wrong with my setup? Or can it really achieve 10G with a
> 64-byte packet size?
>
> Thanks,
>
>
--
Andriy Berestovskyy
> At 2016-09-20 17:41:13, "Andriy Berestovskyy" wrote:
>>Hey,
>>You are correct. The LPM might need just one (TBL24) or two memory
>>reads (TBL24 + TBL8). The performance also drops once you have a
>>variety of destination addresses instead of a single one.
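For reference, a minimal rte_lpm setup reproducing the case above
(old rte_lpm API with 8-bit next hops; values from the quoted test):

    #include <rte_lpm.h>
    #include <rte_ip.h>

    static void
    lpm_example(void)
    {
            struct rte_lpm *lpm;
            uint8_t next_hop;

            lpm = rte_lpm_create("test_lpm", 0 /* socket */, 1024, 0);

            /* /32 rule: depth > 24, so it lands in a TBL8 group */
            rte_lpm_add(lpm, IPv4(192, 168, 1, 1), 32, 2);

            /* any lookup inside 192.168.1.0/24 now costs a TBL24 plus
             * a TBL8 read, even when, as for 192.168.1.2 here, no rule
             * actually matches */
            if (rte_lpm_lookup(lpm, IPv4(192, 168, 1, 2), &next_hop) == 0) {
                    /* forward via next_hop */
            }
    }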
Hi Thomas,
On 06.04.2017 22:48, Thomas Monjalon wrote:
> Anyway, why not fix it in the reverse way: returning an error for
> out-of-range non-jumbo frames?
I guess we need to fix most of the examples then, since most of them
just pass 0 for normal frames. And there is no default for jumbo frames.
On 07.04.2017 10:34, Thomas Monjalon wrote:
> We can set the right default value if the app input is 0,
> as a special case.
> For any other value, we must try to set it or return an error.
Right, I will resend the patch.
Andriy
Signed-off-by: Andriy Berestovskyy
---
Notes:
v3 changes:
- use a default only if max_rx_pkt_len is zero
v2 changes:
- reword the commit title according to the check-git-log.sh
lib/librte_ether/rte_ethdev.c | 23 ---
lib/librte_ether/rte_ethdev.h | 2
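A simplified sketch of the agreed logic inside rte_eth_dev_configure()
(the bounds and surrounding checks are assumptions, not the actual diff):

    /* use the standard Ethernet default only when the app passes 0;
     * for any other value, accept it or fail */
    if (dev_conf->rxmode.max_rx_pkt_len == 0) {
            dev->data->dev_conf.rxmode.max_rx_pkt_len = ETHER_MAX_LEN;
    } else if (dev_conf->rxmode.max_rx_pkt_len < ETHER_MIN_LEN ||
               dev_conf->rxmode.max_rx_pkt_len > dev_info.max_rx_pktlen) {
            return -EINVAL;
    }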
Hey Bruce,
On 07.04.2017 14:29, Bruce Richardson wrote:
> Is this entirely hidden from drivers? As I said previously, I believe
> NICs using ixgbe/i40e etc. only use the frame size value when the jumbo
> frame flag is set. That may lead to further inconsistent behaviour unless
> all NICs are set up to behave
Hey Thomas,
On 07.04.2017 16:47, Thomas Monjalon wrote:
> What if we add to the max_rx_pkt_len description: "the effective maximum
> RX frame size depends on PMD, please refer to the PMD guide for details"?
I think the problem is not in the documentation but in the implementations
which should be
Some PMDs do not support 9.5K jumbo frames, so the example fails.
Limit the frame size to the maximum supported by the underlying NIC.
Signed-off-by: Andriy Berestovskyy
---
examples/ip_fragmentation/main.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
Some PMDs do not support 9.5K jumbo frames, so the example fails.
Limit the frame size to the maximum supported by the underlying NIC.
Signed-off-by: Andriy Berestovskyy
---
examples/ip_reassembly/main.c | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
Some PMDs do not support 9.5K jumbo frames, so the example fails.
Limit the frame size to the maximum supported by the underlying NIC.
Signed-off-by: Andriy Berestovskyy
---
examples/ipv4_multicast/main.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
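All three patches boil down to the same clamp; a sketch, assuming the
examples' own JUMBO_FRAME_MAX_SIZE (9.5K) constant and port_conf global:

    static void
    limit_frame_size(uint8_t port_id)
    {
            struct rte_eth_dev_info dev_info;

            rte_eth_dev_info_get(port_id, &dev_info);

            /* do not ask the NIC for more than it can receive */
            port_conf.rxmode.max_rx_pkt_len =
                    RTE_MIN((uint32_t)JUMBO_FRAME_MAX_SIZE,
                            dev_info.max_rx_pktlen);
    }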
Hey Thomas,
On 21.04.2017 00:25, Thomas Monjalon wrote:
> The hardware is different, there is not much we can do about it.
> We can return an error if the max_rx_pkt_len cannot be set in the NIC.
Yes, we pass the value to the PMD, which might check the value and
return an error.
>> Nevertheless
Hi,
On 25.04.2017 10:48, Thomas Monjalon wrote:
> Do you think it is really a good idea to keep and maintain this script
> in DPDK? It was intentionally not exported in "make install".
I think it is a bit out of scope, and I wonder which alternatives
do we have? I know hwloc/lstopo, but there are p
Port ID is not an index from 0 to n_nic_ports, but rather a value
from the nic_ports array.
Signed-off-by: Andriy Berestovskyy
---
examples/load_balancer/runtime.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/examples/load_balancer/runtime.c b/examples/load_balancer
@@ -418,9 +418,11 @@ app_lcore_io_tx(
>> static inline void
>> app_lcore_io_tx_flush(struct app_lcore_params_io *lp)
>> {
>> + uint8_t i;
>> uint8_t port;
>>
>> - for (port = 0; port < lp->tx.n_nic_ports; port ++) {
>> + port = lp->tx.nic_ports[0];
>> + for (i = 0; i < lp->tx.n_nic_ports; i ++) {
>> uint32_t n_pkts;
>>
>> if (likely((lp->tx.mbuf_out_flush[port] == 0) ||
>>
>
>
--
Andriy Berestovskyy
Works fine on ThunderX and does not break Intel either.
Reviewed-by: Andriy Berestovskyy
Tested-by: Andriy Berestovskyy
Andriy
On 28.04.2017 13:58, Thomas Monjalon wrote:
Andriy, please would you like to review this patch?
28/04/2017 12:34, Gowrishankar:
From: Gowrishankar Muthukrishnan
> rte_eth_devices[internals->port_id].data->nb_tx_queues;
> +
> + slave_details->nb_rx_queues =
> + bond_nb_rx_queues > slave_dev_info->max_rx_queues
> + ? slave_dev_info->max_rx_queues
> + : bond_nb_rx_queues;
> + slave_details->nb_tx_queues =
> + bond_nb_tx_queues > slave_dev_info->max_tx_queues
> + ? slave_dev_info->max_tx_queues
> + : bond_nb_tx_queues;
> +
> /* If slave device doesn't support interrupts then we need to enabled
> * polling to monitor link status */
> if (!(slave_eth_dev->data->dev_flags & RTE_PCI_DRV_INTR_LSC)) {
> diff --git a/drivers/net/bonding/rte_eth_bond_private.h
> b/drivers/net/bonding/rte_eth_bond_private.h
> index 6c47a29..02f6de1 100644
> --- a/drivers/net/bonding/rte_eth_bond_private.h
> +++ b/drivers/net/bonding/rte_eth_bond_private.h
> @@ -101,6 +101,8 @@ struct bond_slave_details {
> uint8_t link_status_poll_enabled;
> uint8_t link_status_wait_to_complete;
> uint8_t last_link_status;
> + uint16_t nb_rx_queues;
> + uint16_t nb_tx_queues;
> /**< Port Id of slave eth_dev */
> struct ether_addr persisted_mac_addr;
>
> @@ -240,7 +242,8 @@ slave_remove(struct bond_dev_private *internals,
>
> void
> slave_add(struct bond_dev_private *internals,
> - struct rte_eth_dev *slave_eth_dev);
> + struct rte_eth_dev *slave_eth_dev,
> + const struct rte_eth_dev_info *slave_dev_info);
>
> uint16_t
> xmit_l2_hash(const struct rte_mbuf *buf, uint8_t slave_count);
> --
> 2.1.4
>
--
Andriy Berestovskyy
The following messages might appear after some idle time:
"PMD: Failed to allocate LACP packet from pool"
The fix ensures the mempool size is greater than the total
number of TX descriptors.
---
drivers/net/bonding/rte_eth_bond_8023ad.c | 24 +++-
1 file changed, 15 insertions(+), 9 deletions(-)
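The sizing rule of the fix, sketched (the descriptor lookup and the
headroom constant are illustrative, not the actual code):

    uint32_t total_tx_desc = 0;
    uint16_t q_id;

    /* every TX descriptor of every slave queue can pin one mbuf, so
     * the LACP pool must hold strictly more mbufs than that, or the
     * periodic LACP TX allocation starves after some idle time */
    for (q_id = 0; q_id < bond_dev->data->nb_tx_queues; q_id++)
            total_tx_desc += tx_queue_desc_count(q_id); /* illustrative */

    pool_size = total_tx_desc + LACP_POOL_HEADROOM;     /* illustrative */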
Fragmented IPv4 packets have no TCP/UDP header, so we hashed
random payload data, introducing reordering of the fragments.
---
drivers/net/bonding/rte_eth_bond_pmd.c | 26 +++---
1 file changed, 15 insertions(+), 11 deletions(-)
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
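The idea, sketched with the fragment bits from rte_ip.h (the l3/l34
hash helpers in the usage comment are illustrative):

    #include <rte_ip.h>
    #include <rte_byteorder.h>

    /* later fragments carry no TCP/UDP header, so every fragment
     * (first included, for consistency) must be hashed on addresses
     * only; hashing "ports" there reads random payload and spreads
     * fragments of one flow across slaves */
    static inline int
    ipv4_is_fragment(const struct ipv4_hdr *ip)
    {
            return (ip->fragment_offset & rte_cpu_to_be_16(
                    IPV4_HDR_OFFSET_MASK | IPV4_HDR_MF_FLAG)) != 0;
    }

    /* usage: hash = ipv4_is_fragment(ip) ? l3_hash(ip) : l34_hash(ip); */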
On Tue, Dec 8, 2015 at 2:23 PM, Andriy Berestovskyy
wrote:
> The following messages might appear after some idle time:
> "PMD: Failed to allocate LACP packet from pool"
>
> The fix ensures the mempool size is greater than the sum
> of TX descriptors.
Signed-off-by: Andriy Berestovskyy
On Tue, Dec 8, 2015 at 3:47 PM, Andriy Berestovskyy
wrote:
> Fragmented IPv4 packets have no TCP/UDP headers, so we hashed
> random data introducing reordering of the fragments.
Signed-off-by: Andriy Berestovskyy
Hi Shahaf,
> On 23 May 2018, at 07:21, Shahaf Shuler wrote:
> I think this patch addresses just a small issue in a bigger problem.
> The way I see it, all an application needs to specify is the max packet
> size it expects to receive, nothing else(!).
[...]
> IMO The "jumbo_frame" bit can be set b
Sure, Ferruh.
Just let me know how I can help you.
Andriy
> On 23 Jan 2019, at 19:36, Ferruh Yigit wrote:
>
>> On 5/24/2018 10:20 AM, Andriy Berestovskyy wrote:
>> Hi Shahaf,
>>
>>> On 23 May 2018, at 07:21, Shahaf Shuler wrote:
>>> I think this patch
The __rte_cache_aligned attribute was applied to the whole array,
not to the array elements. This leads to false sharing between
the monitored cores.
Fixes: e70a61ad50ab ("keepalive: export states")
Cc: remy.hor...@intel.com
Signed-off-by: Andriy Berestovskyy
---
lib/librte_eal/common/rte_keepal
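A minimal illustration of the difference (a sketch, not the keepalive
structure itself):

    #include <rte_lcore.h>
    #include <rte_memory.h>

    /* before: aligns only the start of the array; neighbouring cores
     * still write to the same cache line, hence the false sharing */
    uint8_t core_state_shared[RTE_MAX_LCORE] __rte_cache_aligned;

    /* after: align (and thereby pad) each element, so every monitored
     * core owns a full cache line */
    struct aligned_state {
            uint8_t state;
    } __rte_cache_aligned;

    struct aligned_state core_state_private[RTE_MAX_LCORE];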
Hey Harry,
Thanks for the review.
On Fri, Jan 19, 2018 at 6:31 PM, Van Haaren, Harry
wrote:
> These changes do reduce false-sharing however is there actually a performance
> benefit? A lot of cache space will be taken up if each core requires its own
> cache line, which will reduce performance
The __rte_cache_aligned attribute was applied to the whole array,
not to the array elements. This leads to false sharing between
the monitored cores.
Fixes: e70a61ad50ab ("keepalive: export states")
Cc: remy.hor...@intel.com
Signed-off-by: Andriy Berestovskyy
---
Notes (changelog):
> in any way, I was just curious.
>
> Thanks,
>
> Pragash Vijayaragavan
> Grad Student at Rochester Institute of Technology
> email : pxv3...@rit.edu
> ph : 585 764 4662
--
Andriy Berestovskyy
mostly for patch reviews and RFCs...
Andriy
On Thu, Aug 24, 2017 at 8:54 PM, Pragash Vijayaragavan wrote:
> Thats great, what about the hash functions.
>
> On 24 Aug 2017 10:54, "Andriy Berestovskyy" wrote:
>>
>> Hey Pragash,
>> I am not the author of the
Hey Evgeny,
Please see inline.
On Thu, Aug 31, 2017 at 9:35 AM, Evgeny Agronsky
wrote:
> I'm basically asking because of its poor performance under high
Well, it is not the academic cuckoo hash implementation, so the
performance is not that bad and it also utilizes cache ;)
Please have a look at
Add support for make O=OUTPUT compile time option.
Signed-off-by: Andriy Berestovskyy
---
app/Makefile | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/app/Makefile b/app/Makefile
index 9207d2b..88e8716 100644
--- a/app/Makefile
+++ b/app/Makefile
@@ -57,10 +57,10 @@
of anyway)?
>
> Sincerely,
> Matthew.
--
Andriy Berestovskyy
>>
>> /Arnon
>
> For me, breaking stuff with a black background to gain questionably useful
> colors and/or themes seems like more overhead for cognition of the code for
> not much benefit.
>
> This is going to break the tool people who use a Linux standard framebuffer
> with no X also, isn't it?
>
> Matthew.
--
Andriy Berestovskyy
int64_t
>> > > *pu)
>> > > return -1;
>> > >
>> > > dev->features = *pu;
>> > > - if (dev->features & (1 << VIRTIO_NET_F_MRG_RXBUF)) {
>> > > - LOG_DEBUG(VHOST_CONFIG,
>> > > - "(%"PRIu64") Mergeable RX buffers enabled\n",
>> > > - dev->device_fh);
>> > > + if (dev->features &
>> > > + ((1 << VIRTIO_NET_F_MRG_RXBUF) | (1ULL << VIRTIO_F_VERSION_1))) {
>> > > vhost_hlen = sizeof(struct virtio_net_hdr_mrg_rxbuf);
>> > > } else {
>> > > - LOG_DEBUG(VHOST_CONFIG,
>> > > - "(%"PRIu64") Mergeable RX buffers disabled\n",
>> > > - dev->device_fh);
>> > > vhost_hlen = sizeof(struct virtio_net_hdr);
>> > > }
>> > > + LOG_DEBUG(VHOST_CONFIG,
>> > > + "(%"PRIu64") Mergeable RX buffers %s, virtio 1 %s\n",
>> > > + dev->device_fh,
>> > > + (dev->features & (1 << VIRTIO_NET_F_MRG_RXBUF)) ? "on" :
>> > > "off",
>> > > + (dev->features & (1ULL << VIRTIO_F_VERSION_1)) ? "on" :
>> > > "off");
>> > >
>> > > for (i = 0; i < dev->virt_qp_nb; i++) {
>> > > uint16_t base_idx = i * VIRTIO_QNUM;
>> > > --
>> > > 2.1.0
--
Andriy Berestovskyy
>
>
> When running the exact same test with DPDK version 2.0 no ierrors are
> reported.
> Is anyone else seeing strange ierrors being reported for Intel Niantic
> cards with DPDK 2.1?
>
> Best regards,
> Martin
>
--
Andriy Berestovskyy
> Thanks for pointing the discussion out to me. I somehow missed it.
> Unfortunately it looks like the discussion stopped after Maryam made a
> good proposal so I will vote in on that and hopefully get things started
> again.
>
> Best regards,
> Martin
>
>
>
> On 21.10.15 17:53, Andriy Beresto
(1Gbps device):
>>
>> ERROR HwEmulDPDKPort::init() rte_eth_dev_configure: err=-22, port=0:
>> Unknown error -22
>> EAL: PCI device 0000:03:00.0 on NUMA socket 0
>> EAL: remove driver: 8086:105e rte_em_pmd
>> EAL: PCI memory unmapped at 0x7feb4000
>> EAL: PCI memory unmapped at 0x7feb4002
>>
>> So, for those devices I want to use nb_rx_q=1...
>>
>> Thanks,
>>
>> Francesco Montorsi
>
--
Andriy Berestovskyy
Hi,
Updating to DPDK 2.1 I noticed an issue with the ixgbe stats.
In commit f6bf669b9900 ("ixgbe: account more Rx errors") the XEC
hardware counter (l3_l4_xsum_error) was added to ierrors. The issue
is that UDP packets with a zero checksum are counted in XEC, and now
in ierrors too.
I've tried to dis
Hi Maryam,
Please see below.
> XEC counts the Number of receive IPv4, TCP, UDP or SCTP XSUM errors
Please note that UDP checksum is optional for IPv4, but UDP packets with zero
checksum hit XEC.
> And the general CRC errors counter counts the number of receive
> packets with CRC errors.
Let me explain
we have something that
> is already most of the way there.
>
> If people are going to continue to block it because it is a kernel module,
> then IMO, it's better to leave the existing support on igb / ixgbe in place
> instead of stepping backwards to zero support for ethtool.
>
>> While the code wasn't ready at the time, it was a definite improvement
>> over what
>> > we have with KNI today.
>>
--
Andriy Berestovskyy
Hey folks,
> On 28 Jul 2016, at 17:47, De Lara Guarch, Pablo intel.com> wrote:
> Fair enough. So you mean to use rte_eth_dev_attach in ethdev library and
> a similar function in cryptodev library?
There is a rte_eth_dev_get_port_by_name() which gets the port id right after
the rte_eal_vdev_init
On behalf of the contributors, thank you so much to all the reviewers and
maintainers, and a very big thank you (un très grand merci) to Thomas for
your great job, help and patience ;)
Regards,
Andriy
> On 28 Jul 2016, at 23:39, Thomas Monjalon
> wrote:
>
> Once again, a great release from the impressive DPDK community:
>h
>> >>>
>> >>> CONFIG_RTE_LIBRTE_VHOST=y
>> >>> CONFIG_RTE_LIBRTE_VHOST_USER=y
>> >>> CONFIG_RTE_LIBRTE_VHOST_DEBUG=n
>> >>>
>> >>> then I run vhost app based on documentation:
>> >>>
>> >>> ./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge --socket-mem
>> >>> 3712
>> >>> -- -p 0x1 --dev-basename usvhost --vm2vm 1 --stats 9
>> >>>
>> >>> I use this strange --socket-mem 3712 because of the physical limit
>> >>> of memory on the device. With this vhost-user setup I run two KVM
>> >>> machines with the following parameters
>> >>>
>> >>> kvm -nographic -boot c -machine pc-i440fx-1.4,accel=kvm -name vm1 -cpu
>> >>> host -smp 2 -hda /home/ubuntu/qemu/debian_squeeze2_amd64.qcow2 -m
>> >>> 1024 -mem-path /mnt/huge -mem-prealloc -chardev
>> >>> socket,id=char1,path=/home/ubuntu/dpdk/examples/vhost/usvhost
>> >>> -netdev type=vhost-user,id=hostnet1,chardev=char1
>> >>> -device virtio-net
>> >>> pci,netdev=hostnet1,id=net1,csum=off,gso=off,guest_tso4=off,guest_tso6
>> >>> =
>> >>> off,guest_ecn=off
>> >>> -chardev
>> >>> socket,id=char2,path=/home/ubuntu/dpdk/examples/vhost/usvhost
>> >>> -netdev type=vhost-user,id=hostnet2,chardev=char2
>> >>> -device
>> >>> virtio-net-
>> >>> pci,netdev=hostnet2,id=net2,csum=off,gso=off,guest_tso4=off,guest_tso6
>> >>> =
>> >>> off,guest_ecn=off
>> >>>
>> >>> After running KVM virtio correctly starting (below logs from vhost app)
>> >> ...
>> >>> VHOST_CONFIG: mapped region 0 fd:31 to 0x2aaabae0 sz:0xa
>> >>> off:0x0
>> >>> VHOST_CONFIG: mapped region 1 fd:37 to 0x2aaabb00 sz:0x1000
>> >>> off:0xc
>> >>> VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
>> >>> VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
>> >>> VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
>> >>> VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
>> >>> VHOST_CONFIG: vring kick idx:0 file:38
>> >>> VHOST_CONFIG: virtio isn't ready for processing.
>> >>> VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
>> >>> VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
>> >>> VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
>> >>> VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
>> >>> VHOST_CONFIG: vring kick idx:1 file:39
>> >>> VHOST_CONFIG: virtio is now ready for processing.
>> >>> VHOST_DATA: (1) Device has been added to data core 2
>> >>>
>> >>> So everything looking good.
>> >>>
>> >>> Maybe it is something trivial but using options: --vm2vm 1 (or) 2
>> >>> --stats 9 it seems that I didn't have connection between VM2VM
>> >>> communication. I set manually IP for eth0 and eth1:
>> >>>
>> >>> on 1 VM
>> >>> ifconfig eth0 192.168.0.100 netmask 255.255.255.0 up ifconfig eth1
>> >>> 192.168.1.101 netmask 255.255.255.0 up
>> >>>
>> >>> on 2 VM
>> >>> ifconfig eth0 192.168.1.200 netmask 255.255.255.0 up ifconfig eth1
>> >>> 192.168.0.202 netmask 255.255.255.0 up
>> >>>
>> >>> I notice that in vhostapp are one directional rx/tx queue so I tryied
>> >>> to ping between VM1 to VM2 using both interfaces ping -I eth0
>> >>> 192.168.1.200 ping -I
>> >>> eth1 192.168.1.200 ping -I eth0 192.168.0.202 ping -I eth1
>> >>> 192.168.0.202
>> >>>
>> >>> on VM2 using tcpdump on both interfaces I didn't see any ICMP requests
>> >>> or traffic
>> >>>
>> >>> And I cant ping between any IP/interfaces, moreover stats show me that:
>> >>>
>> >>> Device statistics
>> >>> Statistics for device 0 --
>> >>> TX total: 0
>> >>> TX dropped: 0
>> >>> TX successful: 0
>> >>> RX total: 0
>> >>> RX dropped: 0
>> >>> RX successful: 0
>> >>> Statistics for device 1 --
>> >>> TX total: 0
>> >>> TX dropped: 0
>> >>> TX successful: 0
>> >>> RX total: 0
>> >>> RX dropped: 0
>> >>> RX successful: 0
>> >>> Statistics for device 2 --
>> >>> TX total: 0
>> >>> TX dropped: 0
>> >>> TX successful: 0
>> >>> RX total: 0
>> >>> RX dropped: 0
>> >>> RX successful: 0
>> >>> Statistics for device 3 --
>> >>> TX total: 0
>> >>> TX dropped: 0
>> >>> TX successful: 0
>> >>> RX total: 0
>> >>> RX dropped: 0
>> >>> RX successful: 0
>> >>> ==
>> >>>
>> >>> So it seems like any packet didn't leave my VM.
>> >>> also arp table is empty on each VM.
>> >>
>> >>
>>
>>
--
Andriy Berestovskyy
SFP+ (rev 01).
>
> What is more, is there any particular reason for assuming in
> i40e_xmit_pkts that offloading checksums is unlikely (I mean the line no
> 1307 "if (unlikely(ol_flags & I40E_TX_CKSUM_OFFLOAD_MASK))" at
> dpdk-2.0.0/lib/librte_pmd_i40e/i40e_rxtx.c)?
>
> Regards,
> Angela
--
Andriy Berestovskyy
Hi Zoltan,
On Fri, May 29, 2015 at 7:00 PM, Zoltan Kiss wrote:
> The easy way is just to increase your buffer pool's size to make
> sure that doesn't happen.
Go for it!
> But there is no bulletproof way to calculate such
> a number
Yeah, there are many places for mbufs to stay :( I would try:
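The suggestion is cut off here; the usual rule of thumb (an assumption
standing in for the missing text, all names illustrative) is to add up
every place an mbuf can sit:

    /* worst-case number of mbufs in flight */
    unsigned pool_size =
            nb_ports * (nb_rxd + nb_txd) +  /* RX/TX descriptor rings  */
            nb_lcores * (burst_size +       /* per-core RX/TX bursts   */
                         cache_size) +      /* per-core mempool cache  */
            app_held_mbufs;                 /* buffers the app retains */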
> out-of-band LACP messages will not be handled with
> the expected latency and this may cause the link status to be incorrectly
> marked as down or failure to correctly negotiate with peers.
>
>
> can any one give me example or more detail info ?
>
> I am extremely grateful for it.
--
Andriy Berestovskyy
Hi Ick-Sung,
Please see inline.
On Mon, Apr 18, 2016 at 2:14 PM, ??? wrote:
> If I take an example, the worker assignment method using & (not %) in
> load balancing was not fixed yet.
If the code works, there is nothing to fix, right? ;)
> Question #1) I would like to know how can I read/write
Hi Jay,
On Tue, Apr 19, 2016 at 10:16 PM, Jay Rolette wrote:
> Should the driver error out in that case instead of only "sort of" working?
+1, we hit the same issue. Error or log message would help.
> If I support a max frame size of 9216 bytes (exactly a 1K multiple to make
> the NIC happy), t
>> >
>> > As an app developer, I didn't realize the max frame size didn't include
>> > VLAN tags. I expected max frame size to be the size of the ethernet
>> > frame
>> > on the wire, which I would expect to include space used by any VLAN or
>> > MPLS
>> > tags.
>> >
>> > Is there anything in the docs or example apps about that? I did some
>> > digging as I was debugging this and didn't notice it, but entirely
>> > possible
>> > I just missed it.
>> >
>> >
>> > > I'm not sure there is a works-in-all-cases solution here.
>> > >
>> >
>> > Andriy's suggestion seems like it points in the right direction.
>> >
>> > From an app developer point of view, I'd expect to have a single max
>> > frame
>> > size value to track and the APIs should take care of any adjustments
>> > required internally. Maybe have rte_pktmbuf_pool_create() add the
>> > additional bytes when it calls rte_mempool_create() under the covers?
>> > Then
>> > it's nice and clean for the API without unexpected side-effects.
>> >
>>
>> It will still have unintended side-effects I think, depending on the
>> resolution
>> of the NIC buffer length paramters. For drivers like ixgbe or e1000, the
>> mempool
>> create call could potentially have to add an additional 1k to each buffer
>> just
>> to be able to store the extra eight bytes.
>
>
> The comments in the ixgbe driver say that the value programmed into SRRCTL
> must be on a 1K boundary. Based on your previous response, it sounded like
> the NIC ignores that limit for VLAN tags, hence the check for the extra 8
> bytes on the mbuf element size. Are you worried about the size resolution on
> mempool elements?
>
> Sounds like I've got to go spend some quality time in the NIC data sheets...
> Maybe I should back up and just ask the higher level question:
>
> What's the right incantation in both the dev_conf structure and in creating
> the mbuf pool to support jumbo frames of some particular size on the wire,
> with or without VLAN tags, without requiring scattered_rx support in an app?
>
> Thanks,
> Jay
--
Andriy Berestovskyy
is mailing list...
> is there any other way to search them?
>
> Thanks,
>
> Francesco Montorsi
>
>
>
--
Andriy Berestovskyy