Hi,
have you tried restarting the primary OSD (currently 343)? It looks
like this PG is part of an EC pool; are there enough hosts available,
assuming your failure domain is host? I assume that Ceph isn't able to
recreate the shard on a different OSD. You could share your osd tree
and the
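As a rough sketch of those first steps (assuming a cephadm-managed cluster; the PG ID below is a placeholder):

# Show the CRUSH tree (hosts, OSDs, weights) to verify the failure domain
ceph osd tree
# Inspect the affected PG; replace 2.1f with the actual PG ID
ceph pg 2.1f query
# Restart the current primary OSD (cephadm-managed daemon)
ceph orch daemon restart osd.343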
Hi ceph users/maintainers,
I installed Ceph Quincy on Debian bullseye as a Ceph client and now want
to upgrade to bookworm.
I see that at the moment only bullseye is supported.
https://download.ceph.com/debian-quincy/dists/bullseye/
Will there be an update of
deb https://download.ceph.c
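(For reference, the existing Quincy source for bullseye typically looks like the line below; this is only a sketch of the standard repo layout, not the bookworm entry being asked about.)

# /etc/apt/sources.list.d/ceph.list
deb https://download.ceph.com/debian-quincy/ bullseye main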
Quincy brings support for a 4K allocation unit but doesn't start using it
immediately. Instead, it falls back to 4K when bluefs is unable to
allocate more space with the default size. And even this mode isn't
permanent: bluefs attempts to bring larger allocation units back from time to time.
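As a hedged illustration (option and command names as found in recent releases; osd.0 is just an example), the allocation unit in use can be checked with:

# Configured bluefs allocation unit for OSDs (default is 64K)
ceph config get osd bluefs_shared_alloc_size
# Per-OSD bluefs usage details via the admin socket
ceph daemon osd.0 bluefs stats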
Thanks,
Igor
Hi Siddhit
You need more OSDs. Please read:
https://docs.ceph.com/en/quincy/rados/troubleshooting/troubleshooting-pg/#erasure-coded-pgs-are-not-active-clean
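For example, a quick sketch of what that page asks you to check (pool and profile names are placeholders):

# Which erasure-code profile the pool uses
ceph osd pool get <pool-name> erasure_code_profile
# k, m and crush-failure-domain of that profile; you need at least
# k+m OSDs (or hosts, depending on the failure domain) for active+clean
ceph osd erasure-code-profile get <profile-name>
# Compare with the hosts/OSDs actually available
ceph osd tree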
Greetings
Damian
On 2023-06-20 15:53, siddhit.ren...@nxtgen.com wrote:
Hello All,
Ceph version: 14.2.5-382-g8881d33957
(8881d33957b5
Dear Ceph community,
We want to restructure (i.e. move around) a lot of data (hundreds of
terabytes) in our CephFS.
Now I am wondering what happens within snapshots when I move data
around within a snapshotted folder.
I.e., do I need to account for a lot of increased storage usage due to older
Thank you Eugen!
After finding out what the target name actually was, it all worked like a charm.
Best regards, Mikael
On Wed, Jun 21, 2023 at 11:05 AM Eugen Block wrote:
> Hi,
>
> > Will that try to be smart and just restart a few at a time to keep things
> > up and available? Or will it just trig
Hi Eugen,
Thank you so much for the details. Here is the update (comments in-line >>):
Regards,
Anantha
-----Original Message-----
From: Eugen Block
Sent: Monday, June 19, 2023 5:27 AM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Grafana service fails to start due to bad directory
name af
Hi,
I am not sure if the labels are really removed or if the update is not working.
root@fl31ca104ja0201:/# ceph orch host ls
HOST             ADDR           LABELS                                             STATUS
fl31ca104ja0201  XX.XX.XXX.139  ceph clients mdss mgrs monitoring mons osds rgws
fl31c
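For what it's worth, a sketch of how I would remove and verify a single label (host name taken from your output; the label name is just an example):

# Remove one label from the host, then re-check the listing
ceph orch host label rm fl31ca104ja0201 rgws
ceph orch host ls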
Hi Jayanth,
I don't know that we have a supported way to do this. The
s3-compatible method would be to copy the object onto itself without
requesting server-side encryption. However, this wouldn't prevent
default encryption if rgw_crypt_default_encryption_key was still
enabled. Furthermore, rgw ha
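As a hedged sketch of that s3-compatible approach (bucket, key and endpoint are placeholders; --metadata-directive REPLACE is needed so the copy onto itself is accepted, and as noted above it won't help while default encryption is still enabled):

# Rewrite the object in place without requesting SSE
aws s3api copy-object \
  --endpoint-url https://rgw.example.com \
  --bucket mybucket --key mykey \
  --copy-source mybucket/mykey \
  --metadata-directive REPLACE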
Hey,
Just to confirm my understanding: if I set up a 3-OSD cluster really
fast with an EC 4+2 pool and set the CRUSH map to an osd failure
domain, the data will be distributed among the OSDs, and of course
there won't be protection against host failure. And yes, I know that's
a bad idea, but I nee
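For reference, a minimal sketch of that kind of setup (profile, pool name and PG counts are just examples):

# EC 4+2 profile with OSD, not host, as the failure domain
ceph osd erasure-code-profile set ec42osd k=4 m=2 crush-failure-domain=osd
# Pool backed by that profile
ceph osd pool create ecpool 32 32 erasure ec42osd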
Hello guys,
We have a Ceph cluster that runs just fine with Ceph Octopus; we use RBD
for some workloads, RadosGW (via S3) for others, and iSCSI for some Windows
clients.
We started noticing some unexpected performance issues with iSCSI. I mean,
an SSD pool is reaching 100 MB/s of write speed for an