Hi Mark,

Thanks for your comprehensive response!

Our tests largely match the linked results (we are also testing with 2 
OSDs per NVMe and fio/librbd, but on a much smaller setup). Sometimes we see 
smaller or larger improvements from Nautilus to Octopus, but overall it is 
similar. Only the random write IOPS go the other way round, i.e. they are a 
lot slower in our setup …
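
For reference, this is roughly the kind of fio/librbd run we do; the pool and 
image names ("rbd" / "bench") are only placeholders for our actual test 
objects:

  # 4k random writes against an RBD image via librbd
  fio --ioengine=rbd --clientname=admin --pool=rbd --rbdname=bench \
      --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 \
      --direct=1 --runtime=300 --time_based --name=randwrite-4k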

Meanwhile we have gone through some more testing:

@1) Increasing osd_memory_target from the default (which is 4 GB as far as we 
know) to 16 GB doesn't change the results.
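
For completeness, this is roughly how we changed it; the value is 16 GiB in 
bytes, and osd.0 is just an example daemon for the check:

  # set osd_memory_target to 16 GiB for all OSDs via the config database
  ceph config set osd osd_memory_target 17179869184
  # verify on one daemon (run on the host that carries osd.0)
  ceph daemon osd.0 config get osd_memory_target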

@2/3) The CPUs are configured for high performance in the BIOS, and we 
ensured that the performance governor is set in the kernel as well. Each node 
in our test setup has one Intel Xeon E5-2690 v3 with 12 cores / 24 threads 
running constantly at 3.1 GHz.
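
We verified the governor on each node roughly like this:

  # show the active governor for every core
  cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
  # force the performance governor (cpupower from the kernel tools)
  cpupower frequency-set -g performance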

@4) Yes, we have tested bluefs_buffered_io without success. We did some 
profiling using gdbpmp; collecting 100 samples shows that 0.5%-1% of the time 
is spent in io_submit. Profiling has an extreme performance impact (it 
reduces IOPS to a few hundred operations per second), so we are uncertain 
whether this is relevant information. Can we improve the profiling (we used 
gdbpmp.py -p … -n 100 -m bstore_kv_sync,bstore_kv_final -o … as in the 
example on GitHub)? We would gladly provide the collected sample data if that 
could be helpful. Furthermore, we checked iostat, which looks okay (w_await 
is below 1 ms most of the time).
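
Roughly, this is how we toggled the option and checked the device latencies 
(nvme0n1 stands for the actual OSD device):

  # enable buffered I/O for BlueFS on all OSDs
  ceph config set osd bluefs_buffered_io true
  # watch per-device latencies; w_await is reported in milliseconds
  iostat -x nvme0n1 1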

@5) We have set noscrub and norebalance and disabled automatic scaling of 
the PG count during all our tests.
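
Concretely (the pool name "rbdbench" is only a placeholder for our test pool):

  # disable scrubbing and rebalancing for the duration of the tests
  ceph osd set noscrub
  ceph osd set norebalance
  # turn off the PG autoscaler on the benchmark pool
  ceph osd pool set rbdbench pg_autoscale_mode off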

As the results are reproducible when switching back and forth between 
Nautilus and Octopus, there is clearly something going on in Octopus. Maybe 
it only affects very small setups like ours? As far as we can see, you have 
been testing with 8 nodes / 64 NVMes in total, whereas our setup consists of 
only 3 nodes with one NVMe each.

Kind regards
Stephan