[ceph-users] Re: Optimal number of placement groups per OSD

2025-05-01 Thread gagan tiwari
Thanks Janne! I will go with 2048 PGs. Thanks, Gagan On Thu, May 1, 2025 at 12:49 PM Janne Johansson wrote: > On Thu, May 1, 2025 at 09:12, gagan tiwari wrote: > > Hi Janne, > > Thanks for the explanation. > > So, using all 10x15T disks on
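A rough sketch of applying that value, assuming the EC data pool already exists and is called cephfs_data (a placeholder name, substitute your own); on recent Ceph releases pgp_num follows pg_num automatically:

    import subprocess

    pool = "cephfs_data"  # hypothetical pool name, adjust to your cluster

    # Raise pg_num to the value settled on in this thread; the cluster
    # splits PGs gradually towards the new target.
    subprocess.run(["ceph", "osd", "pool", "set", pool, "pg_num", "2048"],
                   check=True)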

[ceph-users] Re: Optimal number of placement groups per OSD

2025-05-01 Thread Janne Johansson
On Thu, May 1, 2025 at 09:12, gagan tiwari wrote: > Hi Janne, > Thanks for the explanation. > > So, using all 10x15T disks on 7 OSD nodes, the number of PGs will be: > > ( 10 x 7 x 100 ) / 6 = 1166.666; the next power of 2 is 2048. > > So, I will need to set 2048 placement groups. With

[ceph-users] Re: Optimal number of placement groups per OSD

2025-05-01 Thread gagan tiwari
Hi Janne, Thanks for the explanation. So, using all 10x15T disks on 7 OSD nodes, the number of PGs will be: ( 10 x 7 x 100 ) / 6 = 1166.666; the next power of 2 is 2048. So, I will need to set 2048 placement groups. With 2048 PGs, there will be 12,288 pieces to be spread out on 70 OSDs
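As a worked check of that rule of thumb, here is a minimal Python sketch, assuming the 100-PGs-per-OSD target and the 4+2 profile discussed in this thread:

    import math

    osds = 7 * 10          # 7 OSD nodes x 10 NVMe OSDs each
    target_per_osd = 100   # rule-of-thumb target used in this thread
    k, m = 4, 2            # 4+2 erasure-coded data pool

    raw = osds * target_per_osd / (k + m)    # 70 * 100 / 6 = 1166.67
    pg_num = 2 ** math.ceil(math.log2(raw))  # round up to a power of 2 -> 2048

    shards = pg_num * (k + m)                # 2048 * 6 = 12,288 PG shards
    print(pg_num, shards, shards / osds)     # about 175 shards per OSD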

[ceph-users] Re: Optimal number of placement groups per OSD

2025-04-30 Thread Janne Johansson
> Hi Guys, > I have 7 OSD nodes with 10x15T NVMe disks on each OSD node. > > To start with, I want to use only 8x15T disks on each OSD node and keep > 2x15T disks spare in case of any disk failure and recovery event. > > I am going to use a 4+2 EC CephFS data pool to store data.
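A hedged sketch of what such a 4+2 EC CephFS data pool setup could look like, driving the ceph CLI from Python; the profile and pool names are invented, the pg_num comes from the calculation elsewhere in this thread, and the exact flags should be checked against your Ceph release:

    import subprocess

    def ceph(*args):
        # Thin wrapper around the ceph CLI; assumes admin keyring access.
        subprocess.run(["ceph", *args], check=True)

    # 4+2 erasure-code profile with host as the failure domain, so no two
    # shards of a PG land on the same OSD node.
    ceph("osd", "erasure-code-profile", "set", "ec42",
         "k=4", "m=2", "crush-failure-domain=host")

    # EC data pool for CephFS; EC overwrites must be enabled before
    # CephFS can use the pool.
    ceph("osd", "pool", "create", "cephfs_data_ec", "2048", "2048",
         "erasure", "ec42")
    ceph("osd", "pool", "set", "cephfs_data_ec", "allow_ec_overwrites", "true")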