Hello Yuval,
Thanks for your reply!
We continued digging into the problem and found out that it was caused by a
recent change in our infrastructure.
Load balancer pods were added in front of the RGW ones, and those were logging
an SSL error.
As we weren't aware of that change right away, we weren't
On 11/05/2022 23:21, Joost Nieuwenhuijse wrote:
After a reboot the OSD turned out to be corrupt. Not sure whether
ceph-volume lvm new-db caused the corruption or failed because of
another problem.
I just ran into the same issue trying to add a db to an existing OSD.
Apparently this is a known bug:
Good afternoon everybody!
I have the following scenario:
Pool RBD replication x3
5 hosts with 12 SAS spinning disks each
I'm using exactly the following line with FIO to test:
fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4M -size=10G
-iodepth=16 -rw=write -filename=./test.img
If I
On Thu, Aug 10, 2023, 17:36 Murilo Morais wrote:
> Good afternoon everybody!
>
> I have the following scenario:
> Pool RBD replication x3
> 5 hosts with 12 SAS spinning disks each
>
> I'm using exactly the following line with FIO to test:
> fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4M -size=10G
> -iodepth=16 -rw=write -filename=./test.img
>
> If I increase
> > Good afternoon everybody!
> >
> > I have the following scenario:
> > Pool RBD replication x3
> > 5 hosts with 12 SAS spinning disks each
> >
> > I'm using exactly the following line with FIO to test:
> > fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4M -size=10G
> > -iodepth=16 -r
>
> Good afternoon everybody!
>
> I have the following scenario:
> Pool RBD replication x3
> 5 hosts with 12 SAS spinning disks each
Old hardware? SAS is mostly dead.
> I'm using exactly the following line with FIO to test:
> fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4M -s
Hi,
You can use the following formula to roughly calculate the IOPS you can get
from a cluster: (Drive_IOPS * Number_of_Drives * 0.75) / Cluster_Size.
For example, for 60 10K rpm SAS drives each capable of 200 4K IOPS and a
replicated pool with size 3: (~200 * 60 * 0.75) / 3 = ~3000 IOPS with blo
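The formula above can be sketched as a small helper; the function name and the 0.75 overhead default are illustrative, not from the thread:

```python
# Rough client IOPS estimate for a replicated Ceph pool, following the
# formula from the thread: (Drive_IOPS * Number_of_Drives * 0.75) / Cluster_Size.
# The 0.75 factor is a rough allowance for Ceph overhead.

def estimate_cluster_iops(drive_iops: float, num_drives: int,
                          replica_size: int,
                          overhead_factor: float = 0.75) -> float:
    """Approximate the client IOPS a replicated pool can sustain."""
    return drive_iops * num_drives * overhead_factor / replica_size

# The example from the thread: 60 x 10K rpm SAS drives at ~200 4K IOPS each,
# replicated pool with size 3.
print(estimate_cluster_iops(200, 60, 3))  # ~3000
```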
On Thu, Aug 10, 2023 at 12:47, Hans van den Bogert <
hansbog...@gmail.com> wrote:
> On Thu, Aug 10, 2023, 17:36 Murilo Morais wrote:
>
> > Good afternoon everybody!
> >
> > I have the following scenario:
> > Pool RBD replication x3
> > 5 hosts with 12 SAS spinning disks each
> >
> > I'm
On Thu, Aug 10, 2023 at 13:01, Marc
wrote:
> > I have the following scenario:
> > Pool RBD replication x3
> > 5 hosts with 12 SAS spinning disks each
> >
> > I'm using exactly the following line with FIO to test:
> > fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4M -size=1
It makes sense.
On Thu, Aug 10, 2023 at 16:04, Zakhar Kirpichenko
wrote:
> Hi,
>
> You can use the following formula to roughly calculate the IOPS you can
> get from a cluster: (Drive_IOPS * Number_of_Drives * 0.75) / Cluster_Size.
>
> For example, for 60 10K rpm SAS drives each capabl