You could stop the MGRs and wait for the recovery to finish; the MGRs are
not a critical component. You won't have a dashboard or metrics during
that time, but it would prevent the high RAM usage.
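For example, assuming a systemd-based (non-containerized) deployment,
something along these lines should stop the manager daemons (adjust the
instance name to your own MGR ID; "fond-beagle" is taken from your
status output):

-----------------------------------
# Stop every MGR instance on this host
systemctl stop ceph-mgr.target

# Or stop just the one named instance
systemctl stop ceph-mgr@fond-beagle

# Once recovery has finished, start it again
systemctl start ceph-mgr.target
-----------------------------------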
Quoting "Ing. Luis Felipe Domínguez Vega" <luis.doming...@desoft.cu>:
On 2020-10-26 12:23, 胡 玮文 wrote:
On 2020-10-26, at 23:29, Ing. Luis Felipe Domínguez Vega
<luis.doming...@desoft.cu> wrote:
mgr: fond-beagle(active, since 39s)
Your manager seems to be crash looping; it has only been up for 39s.
Looking at the mgr logs may help you identify why your cluster is not
recovering. You may be hitting a bug in the mgr.
Nope, I'm restarting the Ceph manager because it eats all of the
server's RAM. I have a script that restarts the manager whenever free
RAM drops to 1 GB (the server has 94 GB of RAM). I don't know why this
happens, and the manager logs are:
-----------------------------------
root@fond-beagle:/var/lib/ceph/mon/ceph-fond-beagle/store.db# tail
-f /var/log/ceph/ceph-mgr.fond-beagle.log
2020-10-26T12:54:12.497-0400 7f2a8112b700 0 log_channel(cluster)
log [DBG] : pgmap v584: 2305 pgs: 4
active+undersized+degraded+remapped, 4
active+recovery_unfound+undersized+degraded+remapped, 2104
active+clean, 5 active+undersized+degraded, 34 incomplete, 154
unknown; 1.7 TiB data, 2.9 TiB used, 21 TiB / 24 TiB avail;
347248/2606900 objects degraded (13.320%); 107570/2606900 objects
misplaced (4.126%); 19/404328 objects unfound (0.005%)
2020-10-26T12:54:12.497-0400 7f2a8112b700 0 log_channel(cluster)
do_log log to syslog
2020-10-26T12:54:14.501-0400 7f2a8112b700 0 log_channel(cluster)
log [DBG] : pgmap v585: 2305 pgs: 4
active+undersized+degraded+remapped, 4
active+recovery_unfound+undersized+degraded+remapped, 2104
active+clean, 5 active+undersized+degraded, 34 incomplete, 154
unknown; 1.7 TiB data, 2.9 TiB used, 21 TiB / 24 TiB avail;
347248/2606900 objects degraded (13.320%); 107570/2606900 objects
misplaced (4.126%); 19/404328 objects unfound (0.005%)
2020-10-26T12:54:14.501-0400 7f2a8112b700 0 log_channel(cluster)
do_log log to syslog
2020-10-26T12:54:16.517-0400 7f2a8112b700 0 log_channel(cluster)
log [DBG] : pgmap v586: 2305 pgs: 4
active+undersized+degraded+remapped, 4
active+recovery_unfound+undersized+degraded+remapped, 2104
active+clean, 5 active+undersized+degraded, 34 incomplete, 154
unknown; 1.7 TiB data, 2.9 TiB used, 21 TiB / 24 TiB avail;
347248/2606900 objects degraded (13.320%); 107570/2606900 objects
misplaced (4.126%); 19/404328 objects unfound (0.005%)
2020-10-26T12:54:16.517-0400 7f2a8112b700 0 log_channel(cluster)
do_log log to syslog
2020-10-26T12:54:18.521-0400 7f2a8112b700 0 log_channel(cluster)
log [DBG] : pgmap v587: 2305 pgs: 4
active+undersized+degraded+remapped, 4
active+recovery_unfound+undersized+degraded+remapped, 2104
active+clean, 5 active+undersized+degraded, 34 incomplete, 154
unknown; 1.7 TiB data, 2.9 TiB used, 21 TiB / 24 TiB avail;
347248/2606900 objects degraded (13.320%); 107570/2606900 objects
misplaced (4.126%); 19/404328 objects unfound (0.005%)
2020-10-26T12:54:18.521-0400 7f2a8112b700 0 log_channel(cluster)
do_log log to syslog
2020-10-26T12:54:20.537-0400 7f2a8112b700 0 log_channel(cluster)
log [DBG] : pgmap v588: 2305 pgs: 4
active+undersized+degraded+remapped, 4
active+recovery_unfound+undersized+degraded+remapped, 2104
active+clean, 5 active+undersized+degraded, 34 incomplete, 154
unknown; 1.7 TiB data, 2.9 TiB used, 21 TiB / 24 TiB avail;
347248/2606900 objects degraded (13.320%); 107570/2606900 objects
misplaced (4.126%); 19/404328 objects unfound (0.005%)
2020-10-26T12:54:20.537-0400 7f2a8112b700 0 log_channel(cluster)
do_log log to syslog
2020-10-26T12:54:22.541-0400 7f2a8112b700 0 log_channel(cluster)
log [DBG] : pgmap v589: 2305 pgs: 4
active+undersized+degraded+remapped, 4
active+recovery_unfound+undersized+degraded+remapped, 2104
active+clean, 5 active+undersized+degraded, 34 incomplete, 154
unknown; 1.7 TiB data, 2.9 TiB used, 21 TiB / 24 TiB avail;
347248/2606900 objects degraded (13.320%); 107570/2606900 objects
misplaced (4.126%); 19/404328 objects unfound (0.005%)
2020-10-26T12:54:22.541-0400 7f2a8112b700 0 log_channel(cluster)
do_log log to syslog
---------------
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io