Hi,
I have two Ceph clusters in a multi-zone setup. The first one (the master zone)
would be accessible to users via RGW.
The second one is set to sync from the master zone, with the zone's tier type
set to archive (to version all files).
My question here is: is there
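(For context, an archive zone in a multisite setup is typically created roughly as
sketched below; the zonegroup name, zone name and endpoint are placeholders, not
values from this thread:)
# radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=archive \
      --endpoints=http://archive-rgw.example.com:8080 --tier-type=archive
# radosgw-admin period update --commit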
Yesterday I caught the cluster while OSD_SLOW_PING_TIME_FRONT was active:
# ceph health detail
HEALTH_WARN 4 slow ops, oldest one blocked for 9233 sec, daemons [mon.aka,mon.balin] have slow ops.; (muted: OSD_SLOW_PING_TIME_BACK OSD_SLOW_PING_TIME_FRONT)
(MUTED, STICKY) [WRN] OSD_SLOW_PING_TIME_F
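(For reference, sticky mutes like the ones shown in that output are managed with
ceph health mute/unmute; a quick sketch using the warning code from above:)
# ceph health mute OSD_SLOW_PING_TIME_FRONT --sticky
# ceph health unmute OSD_SLOW_PING_TIME_FRONT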
Hi Anthony,
Thank you for posting that link. I can see there that the description for that
option is: 'Inject an expensive sleep during deep scrub IO to make it easier to
induce preemption'.
Does this mean that it is supposed to be used in conjunction with the 'osd
scrub max preemptions' option?
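(If it helps, a rough sketch of how the two options could be exercised together on a
test cluster; osd_debug_deep_scrub_sleep is a debug option and the value below is
illustrative only:)
# ceph config get osd osd_scrub_max_preemptions                   # how often a scrub may be preempted
# ceph tell osd.* injectargs '--osd_debug_deep_scrub_sleep 0.1'   # slow scrub IO so preemption is observable
# ceph tell osd.* injectargs '--osd_debug_deep_scrub_sleep 0'     # revert afterwards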
Hi,
Thanks. Someone told me that we could just destroy the FileStore OSDs
and recreate them as BlueStore, even though the cluster is partially
upgraded. So I guess I’ll just do that. (Unless someone here tells me
that that’s a terrible idea :))
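For what it's worth, the per-OSD recreate usually looks roughly like the sketch
below (the OSD id and device are placeholders, and each OSD's data has to be
rebalanced or fully replicated elsewhere before it is destroyed):
# ceph osd out 12
# systemctl stop ceph-osd@12                      # once the data has moved off the OSD
# ceph osd destroy 12 --yes-i-really-mean-it
# ceph-volume lvm zap /dev/sdX --destroy
# ceph-volume lvm create --bluestore --data /dev/sdX --osd-id 12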
—
Mark Schouten, CTO
Tuxis B.V.
m...@tuxis.nl
Hi,
I don't really have an answer, but there was a bug with the snap mapper
[1]; [2] is supposed to verify consistency. Octopus is EOL, though, so you
might need to upgrade directly to Pacific. That's what we did on
multiple clusters (N --> P) a few months back. I'm not sure if it
would just work
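(After a two-release jump like that, the usual sanity checks are along these lines;
the release name below assumes Pacific is the target:)
# ceph versions                               # confirm every daemon reports the target release
# ceph osd require-osd-release pacific        # final step once all OSDs are upgraded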
Documented here:
https://github.com/ceph/ceph/blob/9754cafc029e1da83f5ddd4332b69066fe6b3ffb/src/common/options/global.yaml.in#L3202
Introduced back here with a bunch of other scrub tweaks:
https://github.com/ceph/ceph/pull/18971/files
Are your OSDs HDDs? Using EC?
How many deep scrubs do you h
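(A quick way to answer that last question is to count PGs that are currently
deep scrubbing, e.g.:)
# ceph pg stat
# ceph pg dump pgs_brief 2>/dev/null | grep -c 'scrubbing+deep'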
Hello,
I'm sorry for not getting back to you sooner.
[2023-01-26 16:25:00,785][ceph_volume.process][INFO ] stdout ceph.block_device=/dev/ceph-808efc2a-54fd-47cc-90e2-c5cc96bdd825/osd-block-2a1d1bf0-300e-4160-ac55-047837a5af0b,ceph.block_uuid=b4WDQQ-eMTb-AN1U-D7dk-yD2q-4dPZ-KyFrHi,ceph.cephx_l
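(Those ceph.* tags can also be inspected directly with the standard tooling, e.g.:)
# ceph-volume lvm list                        # per-OSD view of the same tags
# lvs -o lv_name,lv_tags --noheadings         # raw LVM view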
Hi Andras,
On Sat, Feb 4, 2023 at 1:59 AM Andras Pataki
wrote:
>
> We've been running into a strange issue with ceph-fuse on some nodes
> lately. After some job runs on the node (and finishes or gets killed),
> ceph-fuse gets stuck busy requesting objects from the OSDs without any
> processes on
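(One way to see what such a stuck ceph-fuse client is asking the OSDs for is its
admin socket; a sketch, assuming the default socket directory and a client named
'admin'; the actual socket filename includes the client's pid:)
# ls /var/run/ceph/
# ceph daemon /var/run/ceph/ceph-client.admin.*.asok objecter_requests   # in-flight OSD operations
# ceph daemon /var/run/ceph/ceph-client.admin.*.asok mds_requests        # in-flight MDS requests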
Hi,
I don't really know how the IP address is determined by the mgr; I
only remember that there was a change introduced in 16.2.11 [1] to use
the hostname instead of an IP address. In a 16.2.9 cluster I have all
storage nodes (including RGW) configured with multiple IP addresses,
and it c
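(To see what address or hostname the active mgr actually publishes for its modules,
something like the following can be used:)
# ceph mgr services                                   # URLs the active mgr currently advertises
# ceph config get mgr mgr/prometheus/server_addr      # a per-module bind address, if one was set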
I have been running a Ceph cluster for a while, and one of the main things that impacts performance is deep-scrubbing. I would like to limit this as much as possible and have tried the options below to do this:
osd scrub sleep = 1                # Time to sleep before scrubbing next group of ch
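(The message is cut off here, but for comparison, a few scrub-throttling options that
are commonly tuned alongside osd scrub sleep; the values are illustrative, not
recommendations:)
osd max scrubs = 1                 # concurrent scrubs per OSD
osd scrub load threshold = 0.5     # skip scheduled scrubs above this load
osd scrub begin hour = 22          # confine scrubs to a nightly window
osd scrub end hour = 6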
I figured out that I'll need to update the lv tag `ceph.cephx_lockbox_secret`
as well.
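(A minimal sketch of rewriting that tag with plain LVM tooling; the VG/LV names and
key values are placeholders:)
# lvchange --deltag "ceph.cephx_lockbox_secret=OLDKEY" \
           --addtag "ceph.cephx_lockbox_secret=NEWKEY" \
           ceph-<vg-uuid>/osd-block-<osd-uuid>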
-Zhongzhou Cai
On Mon, Feb 6, 2023 at 5:27 PM Zhongzhou Cai
wrote:
> Hi,
>
> I'm on Ceph 16.2.10, and I'm trying to rotate the ceph lockbox keyring. I
> used ceph-authtool to create a new keyring, and used `