[ceph-users] Re: Path change for CephFS subvolume

2023-08-23 Thread Anh Phan Tuan
Not really sure what you want, but for simplicity, just move the folder into the following structure: /volumes/[Sub Volume Group Name]/[Sub Volume Name]. Ceph will recognize it (no extended attrs needed). If you use a subvolumegroup name other than "_nogroup", you must provide it in all subvolume commands [-
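
A minimal sketch of that move and the follow-up commands, assuming a filesystem named "cephfs" mounted at /mnt/cephfs and a hypothetical group "mygroup" with subvolume "mysubvol":

    # place the existing data under the expected subvolume layout
    mkdir -p /mnt/cephfs/volumes/mygroup
    mv /mnt/cephfs/old_data /mnt/cephfs/volumes/mygroup/mysubvol

    # once under /volumes/<group>/<subvolume>, pass the group to every subvolume command
    ceph fs subvolume getpath cephfs mysubvol --group_name mygroup
    ceph fs subvolume info cephfs mysubvol --group_name mygroup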

[ceph-users] Re: Create OSDs MANUALLY

2023-08-22 Thread Anh Phan Tuan
You don't need to create OSDs manually to get what you want. Cephadm has two options to control that in the OSD specification. OSD Service — Ceph Documentation: block_db_size: Union[int, str, None]
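
As a sketch, an OSD service spec that sets the DB size explicitly might look like the following; the device filters, service_id, and the 60G value are illustrative assumptions, not values from the thread:

    cat <<EOF > osd_spec.yaml
    service_type: osd
    service_id: hdd_with_nvme_db
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        rotational: 1      # HDDs carry the data
      db_devices:
        rotational: 0      # SSD/NVMe carries the DB
      block_db_size: 60G   # fixed DB size per OSD instead of auto-sizing
    EOF
    ceph orch apply -i osd_spec.yaml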

[ceph-users] Re: CephFS metadata outgrow DISASTER during recovery

2023-08-09 Thread Anh Phan Tuan
Hi All, It seems I also faced a similar case last year. I have about 160 x HDDs of mixed sizes and 12 x 480GB NVMe SSDs for the metadata pool. I became aware of the incident when the SSD OSDs went to a near-full state; I increased the nearfull ratio, but these OSDs continued to grow for an unknown reason. This is production so
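
For context, checking OSD fill levels and raising the nearfull ratio is typically done like this (the 0.90 value is only an example; the default is 0.85):

    ceph osd df tree                   # inspect per-OSD utilisation
    ceph osd set-nearfull-ratio 0.90   # raise the nearfull threshold
    ceph osd dump | grep ratio         # confirm full/backfillfull/nearfull ratios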

[ceph-users] Re: cephadm automatic sizing of WAL/DB on SSD

2022-10-05 Thread Anh Phan Tuan
ph.com/issues/56031> Regards, Anh Phan On Fri, Sep 16, 2022 at 2:34 AM Christophe BAILLON wrote: > Hi > > The problem is still present in version 17.2.3, > thanks for the trick to work around... > > Regards > > ----- Original Mail ----- > > From: "Anh Phan Tuan

[ceph-users] Re: cephadm automatic sizing of WAL/DB on SSD

2022-08-11 Thread Anh Phan Tuan
Hi Patrick, I also faced this bug when deploying a new cluster around the time of the 16.2.7 release. The bug relates to the way Ceph calculates the per-slot db_size from the given DB disk. Instead of: slot db size = size of db disk / num slots per disk, Ceph calculated the value: slot db size = size of db disk (just
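
A worked example of the two calculations, with hypothetical numbers (a 1920 GB DB SSD shared across 4 slots):

    DB_DISK_GB=1920
    SLOTS=4
    echo "expected slot db size: $((DB_DISK_GB / SLOTS)) GB"   # 480 GB per OSD DB
    # the buggy calculation skipped the division and used the whole disk size per slot
    echo "buggy slot db size:    ${DB_DISK_GB} GB"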