On 12/03/2018 12:16 PM, Tariq Toukan wrote:


On 12/03/2018 12:08 PM, Tariq Toukan wrote:


On 09/03/2018 10:56 PM, Jesper Dangaard Brouer wrote:
This patch shows how it is possible to have both the driver-local page
cache, which uses an elevated refcnt for "catching"/avoiding SKB
put_page, and, at the same time, have pages returned to the page_pool
from the ndo_xdp_xmit DMA completion.
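
A minimal sketch (not the actual mlx5e code) of how an RX release path
could combine the two mechanisms described above. struct my_rq,
my_rx_cache_put() and the two-argument page_pool_put_page() call are
assumptions based on this RFC-era API and may not match the final
upstream signatures.

#include <linux/mm.h>
#include <net/page_pool.h>

struct my_rq {
	struct page_pool *page_pool;
	/* ... driver-local cache state lives here ... */
};

/* Hypothetical placeholder: a real driver would try to keep the page in
 * its local cache here (with an elevated refcnt, so SKB put_page cannot
 * free it); returning false means the cache did not take the page.
 */
static bool my_rx_cache_put(struct my_rq *rq, struct page *page)
{
	return false;
}

static void my_page_release(struct my_rq *rq, struct page *page, bool recycle)
{
	if (recycle && my_rx_cache_put(rq, page))
		return;		/* page stays in the driver-local cache */

	/* Otherwise return it to the page_pool, e.g. when it comes back
	 * from the ndo_xdp_xmit DMA-TX completion on a remote CPU.
	 */
	page_pool_put_page(rq->page_pool, page);
}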

Performance is surprisingly good. Tested DMA-TX completion on ixgbe,
which calls "xdp_return_frame", which in turn calls page_pool_put_page().
Stats show DMA-TX-completion runs on CPU#9 and mlx5 RX runs on CPU#5.
(Internally page_pool uses ptr_ring, which is what gives the good
cross-CPU performance).
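
A rough sketch of why the remote-CPU return stays cheap: the DMA-TX
completion CPU produces the page pointer into the pool's ptr_ring and
never touches the page allocator, while the RX CPU consumes from the same
ring when refilling. The function and the pool->ring field name below are
illustrative assumptions about the RFC internals; the real logic lives
inside page_pool_put_page()/__page_pool_put_page().

#include <linux/ptr_ring.h>
#include <net/page_pool.h>

static bool sketch_recycle_into_ring(struct page_pool *pool, struct page *page)
{
	/* ptr_ring serializes producers with its own producer lock, so
	 * several TX-completion CPUs can return pages concurrently without
	 * contending on the RX CPU or the page allocator.
	 */
	return ptr_ring_produce(&pool->ring, page) == 0;
}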

Show adapter(s) (ixgbe2 mlx5p2) statistics (ONLY that changed!)
Ethtool(ixgbe2  ) stat:    732863573 (    732,863,573) <= tx_bytes /sec
Ethtool(ixgbe2  ) stat:    781724427 (    781,724,427) <= tx_bytes_nic /sec
Ethtool(ixgbe2  ) stat:     12214393 (     12,214,393) <= tx_packets /sec
Ethtool(ixgbe2  ) stat:     12214435 (     12,214,435) <= tx_pkts_nic /sec
Ethtool(mlx5p2  ) stat:     12211786 (     12,211,786) <= rx3_cache_empty /sec
Ethtool(mlx5p2  ) stat:     36506736 (     36,506,736) <= rx_64_bytes_phy /sec
Ethtool(mlx5p2  ) stat:   2336430575 (  2,336,430,575) <= rx_bytes_phy /sec
Ethtool(mlx5p2  ) stat:     12211786 (     12,211,786) <= rx_cache_empty /sec
Ethtool(mlx5p2  ) stat:     22823073 (     22,823,073) <= rx_discards_phy /sec
Ethtool(mlx5p2  ) stat:      1471860 (      1,471,860) <= rx_out_of_buffer /sec
Ethtool(mlx5p2  ) stat:     36506715 (     36,506,715) <= rx_packets_phy /sec
Ethtool(mlx5p2  ) stat:   2336542282 (  2,336,542,282) <= rx_prio0_bytes /sec
Ethtool(mlx5p2  ) stat:     13683921 (     13,683,921) <= rx_prio0_packets /sec
Ethtool(mlx5p2  ) stat:    821015537 (    821,015,537) <= rx_vport_unicast_bytes /sec
Ethtool(mlx5p2  ) stat:     13683608 (     13,683,608) <= rx_vport_unicast_packets /sec

Before this patch: single flow performance was 6Mpps, and if I started
two flows the collective performance dropped to 4Mpps, because we hit the
page allocator lock (further negative scaling occurs).

V2: Adjustments requested by Tariq
  - Changed page_pool_create to no longer return NULL, only ERR_PTR, as
    this simplifies err handling in drivers (see the sketch after this
    list).
  - Save a branch in mlx5e_page_release
  - Correct page_pool size calc for MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ
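
A sketch of the error handling the first V2 item refers to (reusing the
hypothetical struct my_rq from the sketch further up): with
page_pool_create() returning ERR_PTR() instead of NULL, the driver needs
a single IS_ERR() check. The page_pool_params field names and values
below are illustrative, not the exact mlx5e settings.

#include <linux/err.h>
#include <linux/dma-mapping.h>
#include <linux/numa.h>
#include <net/page_pool.h>

static int my_open_page_pool(struct my_rq *rq, struct device *dev,
			     u32 pool_size)
{
	struct page_pool_params pp_params = {
		.order		= 0,
		.pool_size	= pool_size,
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};

	rq->page_pool = page_pool_create(&pp_params);
	if (IS_ERR(rq->page_pool)) {
		int err = PTR_ERR(rq->page_pool);

		rq->page_pool = NULL;
		return err;	/* no NULL-vs-ERR_PTR ambiguity to handle */
	}
	return 0;
}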

Signed-off-by: Jesper Dangaard Brouer <bro...@redhat.com>
---

I am running perf tests with your series. I sense a drastic degradation in regular TCP flows; I'm double-checking the numbers now...

Well, there's a huge performance degradation indeed whenever the regular flows (non-XDP) use the new page pool. Cannot merge before fixing this.

If I disable the local page-cache, numbers get as low as a few hundred Mbps in TCP stream tests.

It seems that the page-pool doesn't fit as a general fallback (used when the page in the local rx cache is busy), as the refcnt is elevated/changing:
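
A minimal sketch of the invariant the splat below reports, based on the
WARN text rather than the exact upstream page_pool.c code: page_pool may
only recycle a page it owns exclusively, i.e. with refcount == 1, and
pages held by the driver-local rx cache carry an elevated/changing
refcount. sketch_recycle() is a placeholder for the recycle logic.

#include <linux/mm.h>
#include <net/page_pool.h>

/* Placeholder for the actual recycle step (driver cache or ptr_ring);
 * in this sketch it simply drops the reference.
 */
static void sketch_recycle(struct page_pool *pool, struct page *page)
{
	put_page(page);
}

static void sketch_page_pool_put_page(struct page_pool *pool,
				      struct page *page)
{
	if (likely(page_ref_count(page) == 1)) {
		/* Exclusive owner: safe to recycle */
		sketch_recycle(pool, page);
		return;
	}

	/* Shared page: cannot be recycled; this is the WARN in the trace */
	WARN(1, "%s() violating page_pool invariance refcnt:%d\n",
	     __func__, page_ref_count(page));
	put_page(page);
}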

[ 7343.086102] ------------[ cut here ]------------
[ 7343.086103] __page_pool_put_page() violating page_pool invariance refcnt:0
[ 7343.086114] WARNING: CPU: 1 PID: 17 at net/core/page_pool.c:291 __page_pool_put_page+0x7c/0xa0
[ 7343.086114] Modules linked in: mlx5_core(OE) netconsole nfsv3 nfs fscache rpcrdma ib_isert iscsi_target_mod ib_iser libiscsi scsi_transport_iscsi ib_srpt target_core_mod ib_srp scsi_transport_srp ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm mlx4_ib dm_mirror ib_core dm_region_hash dm_log dm_mod dax sb_edac x86_pkg_temp_thermal coretemp kvm ipmi_si ipmi_devintf iTCO_wdt irqbypass crc32_pclmul iTCO_vendor_support ipmi_msghandler ghash_clmulni_intel dcdbas acpi_power_meter sg wmi pcspkr lpc_ich mfd_core shpchp nfsd auth_rpcgss nfs_acl lockd grace sunrpc ip_tables mlx4_en sr_mod cdrom sd_mod mgag200 i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm ahci drm mlx4_core libahci libata crc32c_intel megaraid_sas tg3 i2c_core [last unloaded: mlx5_core]
[ 7343.086137] CPU: 1 PID: 17 Comm: ksoftirqd/1 Tainted: G W OE 4.16.0-rc4+ #7
[ 7343.086138] Hardware name: Dell Inc. PowerEdge R730/0H21J3, BIOS 1.5.4 10/002/2015
[ 7343.086139] RIP: 0010:__page_pool_put_page+0x7c/0xa0
[ 7343.086140] RSP: 0018:ffffc9000653fcc8 EFLAGS: 00010292
[ 7343.086141] RAX: 000000000000003e RBX: ffffea0080d582c0 RCX: 0000000000000000
[ 7343.086141] RDX: 0000000000000000 RSI: 0000000000000002 RDI: 0000000000000246
[ 7343.086142] RBP: ffff882033ffe000 R08: 000000000000003e R09: ffffffff8282f8b6
[ 7343.086142] R10: 00000000000050ee R11: 0000000000000001 R12: ffff881fc857c000
[ 7343.086143] R13: ffff881fc49cc800 R14: 0000000000000040 R15: ffff881fc857c140
[ 7343.086143] FS:  0000000000000000(0000) GS:ffff88203f000000(0000) knlGS:0000000000000000
[ 7343.086144] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 7343.086145] CR2: 00007f0aee0dc8b0 CR3: 000000000200a006 CR4: 00000000001606e0
[ 7343.086156] Call Trace:
[ 7343.086168]  mlx5e_free_rx_mpwqe+0x58/0x70 [mlx5_core]
[ 7343.086176]  mlx5e_handle_rx_cqe_mpwrq+0x8cf/0x9b0 [mlx5_core]
[ 7343.086183]  mlx5e_poll_rx_cq+0xb8/0x890 [mlx5_core]
[ 7343.086190]  mlx5e_napi_poll+0x88/0x640 [mlx5_core]
[ 7343.086192]  net_rx_action+0x286/0x3d0
[ 7343.086194]  __do_softirq+0xd0/0x282
[ 7343.086196]  run_ksoftirqd+0x24/0x40
[ 7343.086198]  smpboot_thread_fn+0xfe/0x150
[ 7343.086199]  kthread+0xf5/0x130
[ 7343.086200]  ? sort_range+0x20/0x20
[ 7343.086201]  ? kthread_bind+0x10/0x10
[ 7343.086203]  ret_from_fork+0x35/0x40
[ 7343.086204] Code: de 48 89 ef 5b 5d e9 84 f9 ff ff f0 ff 4e 1c 74 02 eb e8 8b 56 1c 48 c7 c7 e0 56 ef 81 48 c7 c6 b0 a9 cd 81 31 c0 e8 44 0b a2 ff <0f> 0b f6 45 08 01 75 0a 48 89 df 5b 5d e9 92 1d b5 ff 48 8d 73
[ 7343.086217] ---[ end trace af3c090ef841e41d ]---
