We had a faulty disk which was causing many errors, and replacement took a
while, so we had to try to stop Ceph from using the OSD during this time.
However, I think we must have done that wrong, and after the disk replacement
our ceph orch seems to have picked up /dev/sdp and added a new OSD an
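For what it's worth, the sequence I believe we should have used looks roughly like this (the OSD id is a placeholder, and the last step only matters if an all-available-devices spec is active; I'm not certain this is what we actually ran):

  # mark the failing OSD out and schedule its removal, keeping the id free for the replacement
  ceph osd out 12
  ceph orch osd rm 12 --replace

  # keep cephadm from automatically claiming newly visible devices in the meantime
  ceph orch apply osd --all-available-devices --unmanaged=true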
Dear Ceph community,
I'm facing an issue with ACL changes in the secondary zone of my Ceph
cluster after making modifications to the API name in the master zone of my
master zonegroup. I would appreciate any insights or suggestions on how to
resolve this problem.
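For reference, the api_name change on the master zone was applied roughly along these lines (the zonegroup name below is a placeholder and the exact invocation may have differed):

  # dump, edit and re-import the zonegroup, then commit a new period
  radosgw-admin zonegroup get --rgw-zonegroup=default > zonegroup.json
  # edit the "api_name" field in zonegroup.json
  radosgw-admin zonegroup set < zonegroup.json
  radosgw-admin period update --commit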
Here's the background information
On 7/4/23 10:39, Matthew Booth wrote:
On Tue, 4 Jul 2023 at 10:00, Matthew Booth wrote:
On Mon, 3 Jul 2023 at 18:33, Ilya Dryomov wrote:
On Mon, Jul 3, 2023 at 6:58 PM Mark Nelson wrote:
On 7/3/23 04:53, Matthew Booth wrote:
On Thu, 29 Jun 2023 at 14:11, Mark Nelson wrote:
This contain
The information provided by Casey has been added to doc/radosgw/multisite.rst
in this PR: https://github.com/ceph/ceph/pull/52324
Zac Dover
Upstream Docs
Ceph Foundation
--- Original Message ---
On Saturday, July 1st, 2023 at 1:45 AM, Casey Bodley wrote:
>
>
> cc Zac, who has bee
thanks Yixin,
On Tue, Jul 4, 2023 at 1:20 PM Yixin Jin wrote:
>
> Hi Casey,
> Thanks a lot for the clarification. I feel that zonegroup made great sense
> at the beginning, when the multisite feature was conceived and (I suspect) zones
> were always syncing from all other zones within a zonegroup
I hit this issue when trying to upgrade Octopus 15.2.17 to 16.2.13 last night.
The upgrade process failed at the mgr module phase after the new MGR version
became active. I tried to enable debug logging with
`ceph config set mgr mgr/cephadm/log_to_cluster_level debug`
and I saw a message like @xadhoom76
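Roughly how I then checked where the upgrade was stuck and followed the cephadm log channel:

  ceph orch upgrade status
  ceph -W cephadm --watch-debug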
Hi Matthew,
I see "rbd with pwl cache: 5210112 ns". This latency is beyond my expectations,
and I believe it is unlikely to occur; in theory, this value should be around a
few hundred microseconds. But I'm not sure what went wrong in your steps. Can
you use perf for latency analysis? Hi @Ilya
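Something along these lines, assuming fio (or whichever client) is driving the I/O; the PID lookup and duration are only examples:

  # sample the workload for 30 seconds, then inspect where the time goes
  perf record -F 99 -g -p "$(pidof fio)" -- sleep 30
  perf report --stdio | head -50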
Hi.
I have a Ceph (NVMe) based cluster with 12 hosts and 40 OSDs. Currently it
is backfilling PGs, but I cannot get it to run more than 20 backfilling PGs
at the same time (6+2 profile).
osd_max_backfills = 100 and osd_recovery_max_active_ssd = 50 (non-sane), but it
still stops at 20 with 4
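From memory, those values were set with something like the following (the osd section applies them to all OSDs):

  ceph config set osd osd_max_backfills 100
  ceph config set osd osd_recovery_max_active_ssd 50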
Hi, I am contacting you with some questions about quota.
The situation is as follows:
1. I set the user quota to 10M (roughly as sketched below).
2. Using S3 Browser, I upload one 12M file.
3. The upload failed as I intended, but some objects remain in the pool (almost 10M)
and S3 Browser doesn't show the failed file.
I expected nothing to be lef
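For step 1, the quota was set roughly like this (the uid is a placeholder; 10485760 bytes = 10 MiB):

  radosgw-admin quota set --quota-scope=user --uid=testuser --max-size=10485760
  radosgw-admin quota enable --quota-scope=user --uid=testuser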
Hello!
I am looking to simplify Ceph management on bare metal by deploying Rook onto
Kubernetes that has been deployed on bare metal (RKE). I have used Rook in a
cloud environment, but I have not used it on bare metal. I am wondering if
anyone here runs Rook on bare metal? Would you recommend it
Hello!
Releasing Reef
---
* RC2 is out but we still have several PRs to go, including blockers.
* RC3 might be worth doing, but Reef shall go out before the end of the month.
Misc
---
* For the sake of unit testing of dencoders' interoperability, we're going
to impose some extra w
Hi.
Fresh cluster - after a dance where the autoscaler did not work
(returned blank) as described in the doc - I now seemingly have it
working. It has bumped the target to something reasonable, and is slowly
incrementing pg_num and pgp_num by 2 over time (I hope this is correct?)
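The targets it has picked can be checked with, e.g.:

  ceph osd pool autoscale-status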
But .
jskr@dkc
Morning,
we are running some ceph clusters with rook on bare metal and can very
much recommend it. You should have proper k8s knowledge, knowing how to
change objects such as configmaps or deployments, in case things go
wrong.
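The kind of day-to-day handling meant here, assuming the default rook-ceph namespace and operator deployment name:

  kubectl -n rook-ceph get configmaps,deployments
  kubectl -n rook-ceph edit deployment rook-ceph-operator
  kubectl -n rook-ceph logs deploy/rook-ceph-operator --tail=100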
In regards to stability, the rook operator is written rather defensiv
Hi.
Fresh cluster - but despite setting:
jskr@dkcphhpcmgt028:/$ sudo ceph config show osd.0 | grep recovery_max_active_ssd
osd_recovery_max_active_ssd 50
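The value the running daemon actually holds can also be cross-checked with, e.g.:

  ceph tell osd.0 config get osd_recovery_max_active_ssd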
Hi,
These are incomplete multipart uploads, I guess; you should remove them first. I don't
know how S3 Browser works with these entities
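For example, with awscli (bucket, key and endpoint are placeholders; take the upload id from the listing):

  aws s3api list-multipart-uploads --bucket mybucket --endpoint-url http://rgw.example.com
  aws s3api abort-multipart-upload --bucket mybucket --key bigfile.bin \
      --upload-id <UploadIdFromListing> --endpoint-url http://rgw.example.com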
k
Sent from my iPhone
> On 6 Jul 2023, at 07:57, sejun21@samsung.com wrote:
>
> Hi, I contact you for some question about quota.
>
> Situation is following below.
>