You need to fix this first.
pgs: 0.056% pgs unknown
0.553% pgs not active
The backfilling will cause slow I/O, but having pgs unknown and not active
will cause I/O blocking, which is what you're seeing with the VM booting.
It seems you have 4 OSDs down; if you get them back on
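For what it's worth, roughly these commands should show where things stand
(the OSD id below is a placeholder):

  ceph health detail              # which pgs are unknown / inactive, and why
  ceph osd tree | grep down       # which OSDs are down, and on which hosts
  ceph pg dump_stuck inactive     # the pgs that are currently blocking I/O
  systemctl start ceph-osd@<id>   # on the affected host, try to bring the OSD back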
Hi,
here I describe one of the two major issues I'm currently facing in my
8-node Ceph cluster (2x MDS, 6x OSD).
The issue is that I cannot start any virtual machine (KVM) or container
(LXC); the boot process just hangs after a few seconds.
All these KVMs and LXCs have in common that their virtual disks
Hi,
We are looking for a way to set a timeout on requests to the RADOS Gateway.
If a request takes too long, just kill it.
1. Is there a command that can set the timeout?
2. This parameter looks interesting. Can I know what "open threads"
means?
rgw op thread timeout
Description: The timeout
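In case it helps, a rough sketch of how that option is usually set (the
instance name and the value are placeholders; if I remember correctly the
default is 600 seconds):

  # in ceph.conf on the RGW host, followed by a restart of the radosgw service
  [client.rgw.gateway1]
  rgw op thread timeout = 120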
Any chance for a fix soon? In 14.2.5?
On Thu, Sep 19, 2019 at 8:44 PM Yan, Zheng wrote:
> On Thu, Sep 19, 2019 at 11:37 PM Dan van der Ster wrote:
> >
> > You were running v14.2.2 before?
> >
> > It seems that the ceph_assert you're hitting was indeed added
> > between v14.2.2 and v14.2.
On Fri, Sep 20, 2019 at 12:38 AM Guilherme Geronimo wrote:
>
> Here it is: https://pastebin.com/SAsqnWDi
>
please set debug_mds to 10 and send detailed log to me
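In case it helps, a sketch of how that is typically done (the mds name is a
placeholder):

  ceph config set mds debug_mds 10                 # cluster-wide, survives restarts
  # or, only for an already running daemon, on the MDS host:
  ceph daemon mds.<name> config set debug_mds 10

The detailed log should then land in /var/log/ceph/ceph-mds.<name>.log by default.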
> The command:
>
> timeout 10 rm /mnt/ceph/lost+found/12430c8 ; umount -f /mnt/ceph
>
>
> On 17/09/2019 00:51, Yan, Zheng wrote:
Hello,
We recently upgraded from Luminous to Nautilus; after the upgrade, we are
seeing this sporadic "lock-up" behavior on the RGW side.
What I noticed from the log is that it seems to coincide with the RGW realm
reloader. What we are seeing is that the realm reloader tries to pause
frontends, and
Could you please share how you trimmed the usage log?
Kind regards,
Charles Alva
Sent from Gmail Mobile
On Thu, Sep 19, 2019 at 11:46 PM shubjero wrote:
> Hey all,
>
> Yesterday our cluster went into HEALTH_WARN due to 1 large omap
> object in the .usage pool (I've posted about this in the p
Hey all,
Yesterday our cluster went into HEALTH_WARN due to 1 large omap
object in the .usage pool (I've posted about this in the past). Last
time we resolved the issue by trimming the usage log below the alert
threshold, but this time it seems like the alert won't clear even after
trimming and (th
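For reference, a rough sketch of how the trim itself is usually done with
radosgw-admin, plus the deep scrub that, as far as I know, is needed before
the warning gets re-evaluated (the dates and pg id are placeholders):

  # drop usage log entries in a given date range
  radosgw-admin usage trim --start-date=2019-01-01 --end-date=2019-06-30
  # the large-omap check runs during deep scrub, so re-scrub the flagged pg
  ceph pg deep-scrub <pg.id>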
Btw:
root@deployer:~# cephfs-data-scan -v
ceph version 14.2.4 (75f4de193b3ea58512f204623e6c5a16e6c1e1ba) nautilus (stable)
On 19/09/2019 13:38, Guilherme Geronimo wrote:
Here it is: https://pastebin.com/SAsqnWDi
The command:
timeout 10 rm /mnt/ceph/lost+found/12430c8 ; umount -f /mnt/c
Here it is: https://pastebin.com/SAsqnWDi
The command:
timeout 10 rm /mnt/ceph/lost+found/12430c8 ; umount -f /mnt/ceph
On 17/09/2019 00:51, Yan, Zheng wrote:
please send me crash log
On Tue, Sep 17, 2019 at 12:56 AM Guilherme Geronimo wrote:
Thank you, Yan.
It took like 10 minutes t