Hi everyone,
Join us August 27th at 17:00 UTC to hear Pritha Srivastava present
this month's Ceph Tech Talk: Secure Token Service in the Rados Gateway.
Calendar invite and archive can be found here:
https://ceph.io/ceph-tech-talks/
If you're interested or know someone who could present September's talk, please reach out.
Here's the recording for July's Ceph Tech Talk. Thanks Yuval!
https://www.youtube.com/watch?list=PLrBUGiINAakM36YJiTT0qYepZTVncFDdc&v=XS7jpFxUYQ0&feature=emb_title
On 7/6/20 3:16 PM, Mike Perez wrote:
Hi everyone,
Get ready for another Ceph Tech Talk on July 23rd at 17:00 UTC, but with a
different sc…
OK I just wanted to confirm you hadn't extended the
osd_heartbeat_grace or similar.
On your large cluster, what is the time from stopping an OSD (with
fast shutdown enabled) to:
cluster [DBG] osd.317 reported immediately failed by osd.202
-- dan
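One rough way to measure that is to compare the timestamp of the stop with
the failure report in the cluster log; a sketch, with an assumed OSD id
(317) and the default cluster log path on a monitor host:

  # On the OSD host: record the stop time, then stop the daemon.
  date -u +'%H:%M:%S'
  systemctl stop ceph-osd@317

  # On a monitor host: find the matching failure report in the cluster log
  # (default path shown; adjust if logging is configured differently).
  grep 'osd.317 reported immediately failed' /var/log/ceph/ceph.log | tail -n 3

  # The gap between the two timestamps is the detection time.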
On Thu, Aug 13, 2020 at 4:38 PM Manuel Lausch wrote:
Hi Dan,
The only settings in my ceph.conf related to down/out and peering are
this.
mon osd down out interval = 1800
mon osd down out subtree limit = host
mon osd min down reporters = 3
mon osd reporter subtree level = host
The cluster has 44 hosts with 24 OSDs each.
Manuel
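Whether those values (and the heartbeat defaults) are what the daemons
actually run with can be cross-checked over the admin sockets; a minimal
sketch, assuming the monitor id matches the short hostname and that osd.0
is local:

  # On a monitor host:
  ceph daemon mon.$(hostname -s) config get mon_osd_min_down_reporters
  ceph daemon mon.$(hostname -s) config get mon_osd_reporter_subtree_level

  # On an OSD host, the heartbeat-related options:
  ceph daemon osd.0 config get osd_heartbeat_grace
  ceph daemon osd.0 config get osd_heartbeat_interval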
On Thu, 13 Aug 2
Hi Manuel,
Just to clarify -- do you override any of the settings related to peer
down detection, such as heartbeat periods, timeouts, min down reporters,
or anything like that?
Cheers, Dan
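One general way to answer that is to list everything that deviates from
the defaults; a sketch, with osd.0 standing in for any local daemon:

  # Overrides stored in the cluster's central config database:
  ceph config dump

  # Non-default options on one running OSD, regardless of whether they
  # come from ceph.conf, the central config, or the command line:
  ceph daemon osd.0 config diff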
On Thu, Aug 13, 2020 at 3:46 PM Manuel Lausch wrote:
>
> Hi,
>
> I investigated another problem with my nau
Hi,
I investigated another problem with my nautilus 14.2.11 cluster (seen
with 14.2.10 as well).
If I stop the OSDs on one node (systemctl stop ceph-osd.target, or a
shutdown/reboot), it usually takes several seconds until the cluster
detects the OSDs as down, and I run into slow requests.
I identified the
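To put rough numbers on that behaviour, something like the following
sketch can be used (the OSD id and the polling interval are arbitrary
examples):

  # On the node: note the time and stop all OSDs.
  date -u +%T && systemctl stop ceph-osd.target

  # From another host, watch how long one of those OSDs (e.g. osd.42)
  # stays "up" in the OSD map, then print the time again:
  while ceph osd dump | grep -q '^osd\.42 up'; do sleep 0.5; done; date -u +%T

  # Check whether slow requests showed up in the meantime:
  ceph health detail | grep -i slow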
Hi,
A customer lost 5 OSDs at the same time (and replaced them with new disks
before we could do anything…). Four PGs were incomplete but could be repaired
with ceph-objectstore-tool. The cluster itself is healthy again.
Now some RBDs are missing. They are still listed in the rbd_directory object
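To see what the directory still records for those images, and whether
their metadata objects survived, something along these lines may help (a
sketch; the pool name "rbd" and the image name are placeholders):

  # Name <-> id mappings stored in the directory object:
  rados -p rbd listomapvals rbd_directory

  # For a missing image, check whether its metadata objects still exist
  # (format-2 images use rbd_id.<name> and rbd_header.<id>):
  rados -p rbd stat rbd_id.myimage
  rados -p rbd stat "rbd_header.<image id from the directory>"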