[ceph-users] Re: Performance in Proof-of-Concept cluster

2022-07-07 Thread Hans van den Bogert
Hi, Run a close-to-the-metal benchmark on the disks first, just to see the theoretical ceiling. Also rerun your benchmarks with random writes, to get more honest numbers. Based on the numbers so far, you seem to be getting 40k client iops @512 threads, due to 3x replication an
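(Not part of the original message, but for reference: a raw-device baseline plus a random-write client benchmark along the lines suggested above could look like the sketch below. The device /dev/sdX, pool testpool, and image testimg are placeholders; the fio raw-device run is destructive, so only point it at a scratch disk.)

    # DESTRUCTIVE: 4k random writes straight to a scratch device, bypassing the
    # page cache, to establish the per-disk IOPS ceiling.
    fio --name=raw-randwrite --filename=/dev/sdX --direct=1 --ioengine=libaio \
        --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 --group_reporting \
        --runtime=60 --time_based

    # Client-side 4k random writes through librbd against a test image,
    # for more honest numbers than large sequential writes.
    rbd create testpool/testimg --size 10G
    fio --name=rbd-randwrite --ioengine=rbd --clientname=admin --pool=testpool \
        --rbdname=testimg --rw=randwrite --bs=4k --iodepth=32 \
        --runtime=60 --time_based --group_reporting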

[ceph-users] Re: pacific doesn't defer small writes for pre-pacific hdd osds

2022-07-07 Thread Dan van der Ster
Hi again, I'm not sure the html mail made it to the lists -- resending in plain text. I've also opened https://tracker.ceph.com/issues/56488 Cheers, Dan On Wed, Jul 6, 2022 at 11:43 PM Dan van der Ster wrote: > > Hi Igor and others, > > (apologies for html, but i want to share a plot ;) ) > >
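(As a hedged aside, not taken from the thread: one way to see whether an OSD is actually deferring small writes is to watch BlueStore's deferred-write perf counters on the OSD host; osd.0 below is just an example ID.)

    # Dump the OSD's perf counters via its admin socket and look at the
    # deferred write statistics; if small client writes are not being
    # deferred, these counters will barely move under a 4k write workload.
    ceph daemon osd.0 perf dump | grep deferred_write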

[ceph-users] Re: Performance in Proof-of-Concept cluster

2022-07-07 Thread Marc
> Thanks for sharing. How many nodes/OSDs? I get the following tail for the same command and size=3 (3 nodes, 4 OSDs each): 4 nodes, 2x ssd per node.

[ceph-users] Re: pacific doesn't defer small writes for pre-pacific hdd osds

2022-07-07 Thread Konstantin Shalygin
Hi, > On 7 Jul 2022, at 13:04, Dan van der Ster wrote: > > I'm not sure the html mail made it to the lists -- resending in plain text. > I've also opened https://tracker.ceph.com/issues/56488 > I think with pacific you need to redeploy all OSDs to respec

[ceph-users] Re: pacific doesn't defer small writes for pre-pacific hdd osds

2022-07-07 Thread Dan van der Ster
Hi, On Thu, Jul 7, 2022 at 2:37 PM Konstantin Shalygin wrote: > > Hi, > > On 7 Jul 2022, at 13:04, Dan van der Ster wrote: > > I'm not sure the html mail made it to the lists -- resending in plain text. > I've also opened https://tracker.ceph.com/issues/56488 > > > I think with pacific you need

[ceph-users] Re: pacific doesn't defer small writes for pre-pacific hdd osds

2022-07-07 Thread Konstantin Shalygin
On 7 Jul 2022, at 15:41, Dan van der Ster wrote: > > How is one supposed to redeploy OSDs on a multi-PB cluster while the > performance is degraded? This is a very strong point of view! Good that this case can be fixed by setting bluestore_prefer_deferred_size_hdd to 128k, and I think we need anal
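(For reference, not a quote from the thread: applying the 128k workaround mentioned above cluster-wide could look like the following; whether a given release picks the change up at runtime or needs an OSD restart is something to verify on your own cluster.)

    # Set the deferred-write threshold for HDD OSDs cluster-wide (128 KiB).
    ceph config set osd bluestore_prefer_deferred_size_hdd 131072

    # Check what a specific OSD is actually running with (osd.0 as an example).
    ceph config show osd.0 bluestore_prefer_deferred_size_hdd
    # Or query the daemon directly via its admin socket on the OSD host:
    ceph daemon osd.0 config get bluestore_prefer_deferred_size_hdd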

[ceph-users] Re: [ext] Re: snap_schedule MGR module not available after upgrade to Quincy

2022-07-07 Thread Kuhring, Mathias
Hey Andreas, Indeed, we were also able to remove the legacy schedule DB, and the scheduler is now picking up the work again. We wouldn't have known where to look for it. Thanks for your help and all the details. I really appreciate it. Best, Mathias On 7/7/2022 11:46 AM, Andreas Teuchert wrote:
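(As background, a sketch rather than anything taken from the thread: the snap_schedule manager module can be checked and re-enabled with the standard mgr module commands; the path / and schedule 1h below are only examples.)

    # Confirm the module is present and enabled after the upgrade.
    ceph mgr module ls | grep snap_schedule
    ceph mgr module enable snap_schedule

    # List existing schedules and (re)create one on a CephFS path.
    ceph fs snap-schedule list / --recursive
    ceph fs snap-schedule add / 1h
    ceph fs snap-schedule status /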

[ceph-users] Re: OSD not created after replacing failed disk

2022-07-07 Thread Malte Stroem
Hello Vlad,
- add the following to your yaml and apply it: unmanaged: true
- go to the server that hosts the failed OSD
- fire up cephadm shell; if it's not there, install it and give the server the _admin label: ceph orch host label add servername _admin
- ceph orch osd rm 494
- ceph-vo
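(Not the author's exact procedure, since the preview above is cut off, but a hedged sketch of a typical cephadm disk-replacement flow for comparison; the OSD id 494, spec file osd_spec.yaml, and device /dev/sdX are placeholders.)

    # With the service spec set to unmanaged as above, remove the failed OSD
    # and mark its slot for replacement, zapping the old device data.
    ceph orch osd rm 494 --replace --zap

    # If zapping by hand instead, wipe the old LVM/BlueStore signatures on the
    # replacement device from inside cephadm shell:
    ceph-volume lvm zap /dev/sdX --destroy

    # Once the new disk is in place, re-apply the spec without unmanaged: true
    # so the orchestrator deploys the replacement OSD.
    ceph orch apply -i osd_spec.yaml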