Hi Frank,
I responded to a recent thread [1] about this, you should be able to
run that command for an OSD daemon.
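For example, something along these lines should work (osd.0 is just a placeholder here, any OSD with a local admin socket will do):

ceph daemon osd.0 dump_osd_network   # run on the host where that OSD lives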
Regards,
Eugen
[1]
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/A3NZ4DBMJT2KZQG5SMK6YXYZHIIBFBJW/
Zitat von Frank Schilder :
Dear cephers,
I see "Lon
I didn't look very closely, but I didn't find a tracker issue for this,
so maybe we should create one. I thought the OP from the thread I
responded to would have done that, but apparently not. Do you want to
create it?
Zitat von Frank Schilder :
Thanks! I guess the message in ceph health detail should be changed then. Is this already on the list?
Dear cephers,
I see "Long heartbeat ping times on back interface seen" in ceph status and
ceph health detail says that I should "Use ceph daemon mgr.# dump_osd_network
for more information". I tries, but it seems this command was removed during
upgrade from mimic 13.2.8 to 13.2.10:
[root@ceph-
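One way to check whether a daemon's admin socket still offers the command is to grep its help output (mgr.<id> is a placeholder for the actual daemon name):

ceph daemon mgr.<id> help | grep dump_osd_network   # <id>: placeholder for your mgr name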
(Only one of our test clusters saw this happen so far, during mimic
days, and this provoked us to move all MDSs to 64GB VMs, with mds
cache mem limit = 4GB, so there is a large amount of RAM available in
case it's needed.)
Ours are running on machines with 128GB RAM. I tried limits between 4
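For reference, a cache limit like that can be applied at runtime, e.g. (4 GB expressed in bytes, matching the value mentioned above):

ceph config set mds mds_cache_memory_limit 4294967296   # 4 GB; adjust to your setup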
Just to update the case for others: Setting
ceph config set osd/class:ssd osd_recovery_sleep 0.001
ceph config set osd/class:hdd osd_recovery_sleep 0.05
had the desired effect. I'm running another massive rebalancing operation right
now and these settings seem to help. It would be nice if one co
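To double-check which values are active, the configuration database can be dumped, for example:

ceph config dump | grep osd_recovery_sleep   # shows the per-class overrides set above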
Thanks! I guess the message in ceph health detail should be changed then. Is
this already on the list?
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Eugen Block
Sent: 06 December 2020 12:41:00
To: ceph-users@ceph.io
Can do. I hope I don't forget it :)
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Eugen Block
Sent: 06 December 2020 13:55:13
To: Frank Schilder
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Re: ceph daemon mgr.# dump_osd_network
I have a 74GB vm with 34466MB free space. But when I do 'fstrim /',
'rbd du' still shows 60GB used.
When I fill the 34GB of space with an image, delete it, and run fstrim
again, 'rbd du' still shows 59GB used.
Is this normal? Or should I be able to get it down to ~30GB used?
Have you also tried the 'rbd sparsify' command? It worked for me.
https://docs.ceph.com/en/latest/man/8/rbd/
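A minimal example (pool and image names are placeholders):

rbd sparsify <pool>/<image>   # reclaim zeroed extents
rbd du <pool>/<image>         # check usage afterwards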
Zitat von Marc Roos :
I have a 74GB vm with 34466MB free space. But when I do 'fstrim /',
'rbd du' still shows 60GB used.
When I fill the 34GB of space with an image, delete it, and run fstrim
again, 'rbd du' still shows 59GB used.
At first this worked [1], but after I removed the snapshot I am back at
57GB [2]. I ran rbd sparsify again after the snapshot was removed, but
it stayed the same.
[1]
NAME          PROVISIONED  USED
x...@xxx.bak  74 GiB       59 GiB
XXX           74 GiB       37 GiB
[2]
74 ...
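For completeness, roughly the commands involved (pool/image names anonymized as in the listing above):

rbd snap rm <pool>/XXX@xxx.bak   # remove the snapshot that pins the old extents
rbd sparsify <pool>/XXX
rbd du <pool>/XXX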
Hi,
I have a production cluster that has been experiencing a lot of DELETEs for
many months. However, with the default GC configs, I did not see the
cluster's space utilization going down. Moreover, the gc list has more than 4
million objects. I tried increasing the gc configs on 4 rados gateways and
fi
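For example, the GC queue can be inspected and a full pass forced manually (assuming your radosgw-admin supports --include-all):

radosgw-admin gc list --include-all | wc -l   # size of the GC queue
radosgw-admin gc process --include-all        # process all entries, including unexpired ones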