Please do not even think about using an EC pool (k=2, m=1). See other
posts here; just don't.
Why not?
--
With best regards,
Vitaliy Filippov
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
BW=601MiB/s (630MB/s)(35.2GiB/60003msec)
fio --filename=/dev/nvme0n1 --direct=1 --sync=1 --rw=write --bs=4M
--numjobs=5 --iodepth=1 --runtime=60 --time_based --group_reporting
--name=journal-test
write: IOPS=679, BW=2717MiB/s (2849MB/s)(159GiB/60005msec)
Especially https://yourcmc.ru/wiki/Ceph_performance#CAPACITORS.21, but I
recommend reading the whole article.
r Debian release (10) will handle
the O_DSYNC flag differently?
Perhaps I should simply invest in faster (and bigger) hard disks and
forget the SSD-cluster idea?
Thank you in advance for any help,
Best Regards,
Hermann
September, 2019 15:18:23
Subject: Re: [ceph-users] Re: Strange hardware behavior
Please never use dd for disk benchmarks.
Use fio. For linear write:
fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4M
-iodepth=32 -rw=write -runtime=60 -filename=/dev/sdX
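For Ceph specifically, the linear-write number matters less than sync
single-threaded 4k writes (the journal/WAL pattern, where every write is
flushed). A companion test along the same lines (a sketch, /dev/sdX is a
placeholder and the test is destructive to the device):

fio -ioengine=libaio -direct=1 -sync=1 -name=test -bs=4k -iodepth=1 -rw=write -runtime=60 -filename=/dev/sdX

Roughly speaking, an SSD with power-loss-protected (capacitor-backed)
cache holds up well here, while many consumer drives collapse to a few
hundred IOPS under -sync=1.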