Hi Mark and others,
last week we were finally able to solve the problem. We are using Gentoo
on our test cluster, and as it turned out the official ebuilds do not set
CMAKE_BUILD_TYPE=RelWithDebInfo, which alone caused the performance degradation
we had been seeing after upgrading to Octopus.
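For anyone who wants to double-check their own build, a rough sketch of
configuring a from-source Ceph build with an explicit build type (generic
CMake invocation; your distro's packaging may wrap this differently):

  # With no (or the wrong) CMAKE_BUILD_TYPE, the compiler may get no -O
  # optimization flags at all, which is enough to explain a large slowdown.
  cd ceph && mkdir -p build && cd build
  cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo ..

  # Verify what the build was actually configured with:
  grep CMAKE_BUILD_TYPE CMakeCache.txt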
Based on v15.2.2, with 5 storage nodes (NVMe:OSD = 1:2, Optane as the RocksDB backend)
and 5 clients.
Test case: fio, 20 images, 4K randread/randwrite:
            4K randread (IOPS)   4K randwrite (IOPS)
default     760700               262500
PR34363     1185500              254
In our tests based on v15.2.2, I found that osd_numa_prefer_iface/osd_numa_auto_affinity
caused only half of the CPUs to be used. For 4K randwrite this makes performance drop a lot,
so you may want to check whether the same thing is happening in your setup.
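If you want to rule this out on your cluster, something along these lines should
show and disable the NUMA settings (option names as above; the OSDs likely need a
restart to pick up the change):

  # See how OSDs were pinned, if your release provides the command:
  ceph osd numa-status

  # Disable automatic NUMA affinity for OSDs cluster-wide:
  ceph config set osd osd_numa_auto_affinity false
  ceph config set osd osd_numa_prefer_iface false

  # Confirm the values a specific OSD is actually running with:
  ceph config show osd.0 | grep osd_numa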
On 6/11/20 11:30 AM, Stephan wrote:
Hi Mark,
thanks for your comprehensive response!
Our tests basically match the linked results (we are testing with 2
OSDs/NVMe and fio/librbd too, but with a much smaller setup). Sometimes we
see smaller or larger improvements from Nautilus to Octopus, but it is similar.
Only the rando
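For reference, a single 4K randread job against one RBD image with fio's librbd
engine can be run roughly like this (pool and image names are placeholders, and
the queue depth is just an example):

  fio --ioengine=rbd --clientname=admin --pool=rbd --rbdname=testimg01 \
      --rw=randread --bs=4k --iodepth=32 --numjobs=1 --direct=1 \
      --time_based --runtime=60 --name=rbd-4k-randread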
Oh, one other thing:
Check for background work, especially the PG balancer. In all of my tests
the balancer was explicitly disabled. During benchmarks there may be a
high background workload affecting client IO if it is constantly
rebalancing PGs in the pool.
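For example, the balancer can be checked and switched off for the duration of a
benchmark (and re-enabled afterwards if you normally rely on it):

  # Show whether the balancer is active and which mode it uses:
  ceph balancer status

  # Turn it off while benchmarking:
  ceph balancer off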
Mark
On 6/4/20 11
Hi Stephan,
We recently ran a set of 3-sample tests looking at 2OSD/NVMe vs 1
OSD/NVMe RBD performance on Nautilus, Octopus, and Master on some of our
newer performance nodes with Intel P4510 NVMe drives. Those tests use
the librbd fio backend. We also saw similar randread and seq write
per
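In case it helps reproduction, a 2-OSDs-per-NVMe layout like the one described
above can be created with ceph-volume (the device path is a placeholder; adjust
to your hardware):

  # Carve two OSDs out of a single NVMe device:
  ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1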
Thanks for your fast reply! We just tried all four possible combinations of
bluefs_preextend_wal_files and bluefs_buffered_io, but the write IOPS in test
"usecase1" remain the same. By the way, bluefs_preextend_wal_files was already
false in 14.2.9 (as it is in 15.2.3). Any other ideas?
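In case anyone wants to repeat that matrix, the two options can be set and
verified roughly like this (the values shown are only examples; an OSD restart
may be needed before they take effect):

  # Set the combination to test:
  ceph config set osd bluefs_buffered_io true
  ceph config set osd bluefs_preextend_wal_files false

  # Check what a given OSD is actually running with:
  ceph config show osd.0 | grep -E 'bluefs_(buffered_io|preextend_wal_files)'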
David Orman wrote:
On Thu, 4 Jun 2020 at 16:29, David Orman wrote:
>* bluestore: common/options.cc: disable bluefs_preextend_wal_files <--
> from 15.2.3 changelogs. There was a bug which lead to issues on OSD
>
Given that preextending WAL files was mentioned as a speed-increasing
feature in Nautilus 14.2.3 re
* bluestore: common/options.cc: disable bluefs_preextend_wal_files <--
from the 15.2.3 changelog. There was a bug which led to issues on OSD
restart, and I believe this was the attempt at mitigation until a proper
bugfix could be put into place. I suspect this might be the cause of the
symptoms you are seeing.