[ceph-users] Re: Ceph bucket notification events stop working

2023-08-10 Thread daniel . yordanov1
Hello Yuval,

Thanks for your reply! We continued digging into the problem and found out that it was caused by a recent change in our infrastructure. Load balancer pods were added in front of the rgw ones, and those were logging an SSL error. As we weren't aware of that change right away, we weren't

[ceph-users] Re: ceph-volume lvm new-db fails

2023-08-10 Thread Christian Rohmann
On 11/05/2022 23:21, Joost Nieuwenhuijse wrote:
> After a reboot the OSD turned out to be corrupt. Not sure if ceph-volume lvm new-db caused the problem, or failed because of another problem.

I just ran into the same issue trying to add a db to an existing OSD. Apparently this is a known bug:
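(For context, the command being discussed attaches a separate DB volume to an existing OSD. The line below is only a sketch of the documented invocation; the OSD id, fsid, and target VG/LV are placeholder values, not anything from this thread.)

# Illustrative only: osd-id, osd-fsid and the target VG/LV are hypothetical placeholders.
ceph-volume lvm new-db --osd-id 1 --osd-fsid <osd-fsid> --target db-vg/db-lv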

[ceph-users] librbd 4k read/write?

2023-08-10 Thread Murilo Morais
Good afternoon everybody!

I have the following scenario:
Pool RBD replication x3
5 hosts with 12 SAS spinning disks each

I'm using exactly the following line with FIO to test:
fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4M -size=10G -iodepth=16 -rw=write -filename=./test.img

If I
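(Since the thread subject is 4k read/write, a 4k random-I/O variant of that command might look like the sketch below; the iodepth, numjobs, runtime, and read/write mix are illustrative assumptions, not values from the original post.)

# Hypothetical 4k random read/write run against the same test file; all
# tuning parameters here are assumptions for illustration only.
fio -ioengine=libaio -direct=1 -invalidate=1 -name=test-4k -bs=4k -size=10G \
    -iodepth=32 -numjobs=4 -rw=randrw -rwmixread=70 \
    -time_based -runtime=60 -group_reporting -filename=./test.img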

[ceph-users] Re: librbd 4k read/write?

2023-08-10 Thread Hans van den Bogert
On Thu, Aug 10, 2023, 17:36 Murilo Morais wrote:
> Good afternoon everybody!
>
> I have the following scenario:
> Pool RBD replication x3
> 5 hosts with 12 SAS spinning disks each
>
> I'm using exactly the following line with FIO to test:
> fio -ioengine=libaio -direct=1 -invalidate=1 -name=test

[ceph-users] Re: librbd 4k read/write?

2023-08-10 Thread Marc
> I have the following scenario:
> Pool RBD replication x3
> 5 hosts with 12 SAS spinning disks each
>
> I'm using exactly the following line with FIO to test:
> fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4M -size=10G
> -iodepth=16 -rw=write -filename=./test.img
>
> If I increase

[ceph-users] Re: librbd 4k read/write?

2023-08-10 Thread Marc
> > Good afternoon everybody!
> >
> > I have the following scenario:
> > Pool RBD replication x3
> > 5 hosts with 12 SAS spinning disks each
> >
> > I'm using exactly the following line with FIO to test:
> > fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4M -size=10G
> > -iodepth=16 -r

[ceph-users] Re: librbd 4k read/write?

2023-08-10 Thread Anthony D'Atri
>
> Good afternoon everybody!
>
> I have the following scenario:
> Pool RBD replication x3
> 5 hosts with 12 SAS spinning disks each

Old hardware? SAS is mostly dead.

> I'm using exactly the following line with FIO to test:
> fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4M -s

[ceph-users] Re: librbd 4k read/write?

2023-08-10 Thread Zakhar Kirpichenko
Hi,

You can use the following formula to roughly calculate the IOPS you can get from a cluster: (Drive_IOPS * Number_of_Drives * 0.75) / Cluster_Size.

For example, for 60 10K rpm SAS drives each capable of 200 4K IOPS and a replicated pool with size 3: (~200 * 60 * 0.75) / 3 = ~3000 IOPS with blo
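(The example arithmetic above can be reproduced with a quick one-liner; the numbers are the ones from the example, not measurements.)

# Rough IOPS estimate: (Drive_IOPS * Number_of_Drives * 0.75) / Cluster_Size
awk 'BEGIN { print (200 * 60 * 0.75) / 3 }'   # prints 3000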

[ceph-users] Re: librbd 4k read/write?

2023-08-10 Thread Murilo Morais
On Thu, Aug 10, 2023 at 12:47, Hans van den Bogert <hansbog...@gmail.com> wrote:
> On Thu, Aug 10, 2023, 17:36 Murilo Morais wrote:
>
> > Good afternoon everybody!
> >
> > I have the following scenario:
> > Pool RBD replication x3
> > 5 hosts with 12 SAS spinning disks each
> >
> > I'm

[ceph-users] Re: librbd 4k read/write?

2023-08-10 Thread Murilo Morais
On Thu, Aug 10, 2023 at 13:01, Marc wrote:
> > I have the following scenario:
> > Pool RBD replication x3
> > 5 hosts with 12 SAS spinning disks each
> >
> > I'm using exactly the following line with FIO to test:
> > fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4M -size=1

[ceph-users] Re: librbd 4k read/write?

2023-08-10 Thread Murilo Morais
It makes sense.

On Thu, Aug 10, 2023 at 16:04, Zakhar Kirpichenko wrote:
> Hi,
>
> You can use the following formula to roughly calculate the IOPS you can
> get from a cluster: (Drive_IOPS * Number_of_Drives * 0.75) / Cluster_Size.
>
> For example, for 60 10K rpm SAS drives each capabl