++adding
@ceph-users-confirm+4555fdc6282a38c849f4d27a40339f1b7e4bd...@ceph.io
++Adding d...@ceph.io
Thanks & Regards
Arihant Jain
On Mon, 27 Nov, 2023, 7:48 am AJ_ sunny wrote:
> Hi team,
>
> After doing the above changes I am still getting the issue in which machine
> c
Hi team,
Any update on this?
Thanks & Regards
Arihant Jain
On Mon, 27 Nov, 2023, 8:07 am AJ_ sunny wrote:
> ++adding
> @ceph-users-confirm+4555fdc6282a38c849f4d27a40339f1b7e4bd...@ceph.io
>
> ++Adding d...@ceph.io
>
>
> Thanks & Regards
> Arihant Jain
>
> sor or several across
> different hypervisors?
> The nova-compute.log doesn't seem to be enough, but you could also
> enable debug logs to see if it reveals more.
>
> Quoting AJ_ sunny:
>
> > Hi team,
> >
> > After doing the above changes I am still getting the is
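Regarding the suggestion above to enable debug logs: a rough sketch of what
that typically involves on the affected hypervisor (assuming a stock
nova.conf and a systemd-managed service; the unit name varies by
distribution, e.g. nova-compute vs. openstack-nova-compute):

    # /etc/nova/nova.conf on the hypervisor showing the problem
    [DEFAULT]
    debug = True

    # restart the compute service so the setting takes effect,
    # then follow the log while reproducing the issue
    systemctl restart nova-compute
    tail -f /var/log/nova/nova-compute.log

With debug enabled, nova-compute logs its RPC and virt-driver interactions
in detail, which usually makes it clearer at which step an instance gets
stuck.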
Hello team,
I have one small Ceph cluster in production with 53 SSDs of 7 TB each
across 6 nodes.
Version: Octopus
In the last two days we cleared out ~100 TB of data out of a 370 TB total
size.
So there is a bunch of PGs in active+clean+snaptrim & snaptrim_wait state
that come into action afte
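A quick way to see how much snap trimming is pending, plus the OSD settings
that usually govern its pace, is sketched below (the values are purely
illustrative, not recommendations for this cluster):

    # count PGs currently trimming or queued to trim
    ceph pg dump pgs 2>/dev/null | grep -cE 'snaptrim|snaptrim_wait'

    # current trim pacing; on SSDs the trim sleep defaults to 0
    ceph config get osd osd_snap_trim_sleep_ssd
    ceph config get osd osd_pg_max_concurrent_snap_trims

    # example: allow more concurrent snap trims per PG to work through the
    # backlog faster, at the cost of more client-visible load
    ceph config set osd osd_pg_max_concurrent_snap_trims 4

Deleting ~100 TB of snapshotted data queues a large amount of trim work, so
a long tail of PGs cycling through active+clean+snaptrim after a bulk delete
is expected; the settings above only trade trim speed against client I/O
impact.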
Hello team,
Any update on this?
Why is the autoscaler taking so long, and why is it so slow?
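For the autoscaler part, a couple of read-only checks plus the one throttle
that usually matters, again with illustrative values:

    # per-pool target vs. actual pg_num and whether a change is still pending
    ceph osd pool autoscale-status

    # overall recovery/backfill progress
    ceph -s

    # limits how far the autoscaler/balancer will push misplaced objects
    # at a time; the default is 0.05 (5%)
    ceph config get mgr target_max_misplaced_ratio

pg_num changes are applied in steps and each step triggers backfill, so on a
cluster that is also busy with snaptrim a pg_num adjustment can take days;
that pacing is throttled by design rather than a fault.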
Thanks
Arihant Jain
On Wed, 23 Oct, 2024, 7:41 pm AJ_ sunny wrote:
> Hello team,
>
> I have one small Ceph cluster in production with 53 SSDs of 7 TB each
> across 6 nodes.
>
> Version: Octopus