With that kind of read performance degradation I would immediately think of llite's max_read_ahead parameters on the client. Specifically these two:

max_read_ahead_mb: the total amount of MB allocated for read-ahead on the client; usually quite low for bandwidth benchmarking purposes, especially when there are several files per client.

max_read_ahead_per_file_mb: the default is quite low for 16MB RPCs (only a few RPCs per file).

You probably need to check the effect of increasing both of them, along the lines of the sketch below.
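A minimal sketch of checking and raising them with lctl. The values here are only assumptions for a 16MB-RPC streaming read test, not tested recommendations; scale them to the client's RAM and the number of files read per client:

    # Current client read-ahead limits (both reported in MB)
    lctl get_param llite.*.max_read_ahead_mb
    lctl get_param llite.*.max_read_ahead_per_file_mb

    # Example values (assumed): enough per-file read-ahead for a handful
    # of 16MB RPCs, plus a total budget covering several files per client
    lctl set_param llite.*.max_read_ahead_per_file_mb=256
    lctl set_param llite.*.max_read_ahead_mb=1024

Note that values set this way do not persist across a remount.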
Regards,
Diego

From: lustre-discuss <[email protected]> on behalf of Pinkesh Valdria <[email protected]>
Date: Tuesday, 10 December 2019 at 09:40
To: "[email protected]" <[email protected]>
Subject: [lustre-discuss] Degraded read performance with Large Bulk IO (16MB RPC)

I was expecting better or equal read performance with Large Bulk IO (16MB RPC), but I see a degradation in performance. Do I need to tune any other parameter to benefit from Large Bulk IO? I would appreciate any pointers to troubleshoot further.

Throughput before:
- Read: 2563 MB/s
- Write: 2585 MB/s

Throughput after:
- Read: 1527 MB/s (down by ~1036 MB/s)
- Write: 2859 MB/s

Changes I made:

On the OSS:
- lctl set_param obdfilter.lfsbv-*.brw_size=16

On the clients (unmounted and remounted):
- lctl set_param osc.lfsbv-OST*.max_pages_per_rpc=4096 (got auto-updated after the remount)
- lctl set_param osc.*.max_rpcs_in_flight=64 (had to manually increase this to 64, since after the remount it was auto-set to 8 and read/write performance was poor)
- lctl set_param osc.*.max_dirty_mb=2040 (setting the value to 2048 failed with a "Numerical result out of range" error; previously it was set to 2000 when I got good performance)

My other settings:
- lnetctl net add --net tcp1 --if $interface --peer-timeout 180 --peer-credits 128 --credits 1024
- echo "options ksocklnd nscheds=10 sock_timeout=100 credits=2560 peer_credits=63 enable_irq_affinity=0" > /etc/modprobe.d/ksocklnd.conf
- lfs setstripe -c 1 -S 1M /mnt/mdt_bv/test1
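One further check worth making on the setup quoted above is whether the 16MB read RPCs are actually being issued. A sketch, assuming the lfsbv filesystem name from the quoted message:

    # On a client: clear the per-OSC RPC histograms, run the read benchmark,
    # then inspect them; with brw_size=16 and max_pages_per_rpc=4096, most
    # read RPCs should land in the 4096 pages-per-RPC bucket
    lctl set_param osc.lfsbv-OST*.rpc_stats=0
    lctl get_param osc.lfsbv-OST*.rpc_stats

    # On the OSS: the matching bulk I/O size histogram
    lctl get_param obdfilter.lfsbv-*.brw_stats

If reads are still arriving as small RPCs, read-ahead (above) is the usual suspect, since the read-ahead window is what lets the client fill large RPCs on reads.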
