[ceph-users] Re: openstack Vm shutoff by itself

2023-11-26 Thread AJ_ sunny
++adding @ceph-users-confirm+4555fdc6282a38c849f4d27a40339f1b7e4bd...@ceph.io
++Adding d...@ceph.io

Thanks & Regards
Arihant Jain

On Mon, 27 Nov, 2023, 7:48 am AJ_ sunny, wrote:
> Hi team,
>
> After doing the above changes I am still getting the issue in which the machine c

[ceph-users] Re: openstack Vm shutoff by itself

2023-11-26 Thread AJ_ sunny
Hi team,

Any update on this?

Thanks & Regards
Arihant Jain

On Mon, 27 Nov, 2023, 8:07 am AJ_ sunny, wrote:
> ++adding @ceph-users-confirm+4555fdc6282a38c849f4d27a40339f1b7e4bd...@ceph.io
> ++Adding d...@ceph.io
>
> Thanks & Regards
> Arihant Jain

[ceph-users] Re: openstack Vm shutoff by itself

2023-11-29 Thread AJ_ sunny
> ...sor or several across different hypervisors?
> The nova-compute.log doesn't seem to be enough, but you could also
> enable debug logs to see if it reveals more.
>
> Zitat von AJ_ sunny:
>
>> Hi team,
>>
>> After doing the above changes I am still getting the is
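
The debug-log suggestion above usually comes down to setting debug = True in the [DEFAULT] section of nova.conf on the affected hypervisor and restarting nova-compute. Below is a minimal sketch of that change in Python; the nova.conf path and the restart command are assumptions about a typical deployment and may differ on yours.

import configparser
import shutil

NOVA_CONF = "/etc/nova/nova.conf"              # typical path; adjust as needed

shutil.copy2(NOVA_CONF, NOVA_CONF + ".bak")    # keep a backup before editing

# Interpolation is disabled because nova.conf values can contain literal '%'.
cfg = configparser.ConfigParser(interpolation=None)
cfg.read(NOVA_CONF)

# oslo.log reads 'debug' from [DEFAULT]; True raises nova-compute to DEBUG level.
cfg["DEFAULT"]["debug"] = "True"

with open(NOVA_CONF, "w") as f:
    cfg.write(f)                               # note: ConfigParser drops comments

# Afterwards restart the service, e.g. systemctl restart nova-compute
# (or restart the nova_compute container on a containerized deployment).

Editing the file by hand achieves the same thing; the point is only that the debug flag lives in [DEFAULT] and takes effect after a service restart.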

[ceph-users] PG autoscaler taking too long

2024-10-23 Thread AJ_ sunny
Hello team,

I have one small Ceph cluster in production with 53 SSDs of 7 TB each across 6 nodes.

Version: Octopus

In the last two days we cleared out ~100 TB of data out of a 370 TB total size, so there is a bunch of PGs in active+clean+snaptrim & snaptrim_wait state that comes into action afte
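
One way to watch how much snaptrim work remains is to read the pgs_by_state summary from the JSON output of ceph status. A small sketch follows; it assumes the ceph CLI and a keyring allowed to run ceph status are available on the node.

import json
import subprocess

status = json.loads(
    subprocess.run(
        ["ceph", "status", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
)

# pgs_by_state entries look like {"state_name": "active+clean+snaptrim", "count": 12}
for entry in status["pgmap"].get("pgs_by_state", []):
    if "snaptrim" in entry["state_name"]:
        print(entry["state_name"], entry["count"])

Snap trimming is deliberately throttled; if it needs to go faster, OSD options such as osd_snap_trim_sleep and osd_pg_max_concurrent_snap_trims are the usual knobs, at the cost of more impact on client I/O.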

[ceph-users] Re: PG autoscaler taking too long

2024-10-23 Thread AJ_ sunny
Hello team,

Any update on this? Why is the autoscaler taking so long / running so slowly?

Thanks
Arihant Jain

On Wed, 23 Oct, 2024, 7:41 pm AJ_ sunny, wrote:
> Hello team,
>
> I have one small Ceph cluster in production with 53 SSDs of 7 TB each across 6 nodes.
>
> Version: Octopus
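
On the autoscaler question, one rough way to see whether it is still making progress is to compare each pool's current pg_num/pgp_num with the *_target values from ceph osd dump. The sketch below assumes those target fields are present in the JSON, as they are on Nautilus-and-later (including Octopus) monitors.

import json
import subprocess

dump = json.loads(
    subprocess.run(
        ["ceph", "osd", "dump", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
)

# Pools whose pg_num/pgp_num have not yet reached the autoscaler's target.
for pool in dump["pools"]:
    pg, pg_t = pool["pg_num"], pool.get("pg_num_target", pool["pg_num"])
    pgp, pgp_t = pool["pgp_num"], pool.get("pgp_num_target", pool["pgp_num"])
    if pg != pg_t or pgp != pgp_t:
        print(f'{pool["pool_name"]}: pg_num {pg} -> {pg_t}, '
              f'pgp_num {pgp} -> {pgp_t}')

Some slowness is expected here: the mgr moves pgp_num in small increments so that the share of misplaced objects stays under target_max_misplaced_ratio (5% by default), and that data movement competes with the snaptrim backlog left by the ~100 TB deletion.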