Hi,
I want to know whether the multisite sync policy can handle the following
scenario:
1. bidirectional/symmetric replication of all buckets with names beginning with 'devbuckets-*'
2. replication of all prefixes in those buckets EXCEPT, say, 'tmp/'
The documentation is not clear on this, and I want
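For reference, the zonegroup-level sync policy primitives I have in mind look roughly like this (a sketch based on the multisite sync policy docs; the group/flow/pipe ids are made up, and I have not confirmed that a wildcard bucket name works here, nor found any way to exclude a prefix such as 'tmp/'):

radosgw-admin sync group create --group-id=group1 --status=allowed
radosgw-admin sync group flow create --group-id=group1 --flow-id=flow-mirror \
  --flow-type=symmetrical --zones=site1,site2
radosgw-admin sync group pipe create --group-id=group1 --pipe-id=pipe1 \
  --source-zones='*' --source-bucket='devbuckets-*' \
  --dest-zones='*' --dest-bucket='devbuckets-*'
radosgw-admin period update --commit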
Hi,
I have a multisite system with two sites on 18.2.2, on Rocky 8.
I have set up a sync policy to allow replication between sites. I have also
created a policy for a given bucket that prevents replication on that given
bucket. This all works just fine, and objects I create in that bucket on side
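(For context, the per-bucket "no replication" policy was created with a bucket-level sync group along these lines; this is my sketch of it, with a made-up group id:)

radosgw-admin sync group create --bucket=<bucket> --group-id=no-sync --status=forbidden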
Hi,
I see that 18.2.4 is out, in rpm for el9 at:
http://download.ceph.com/rpm-18.2.4/ Are there any plans for an '8' version?
One of my clusters is not yet ready to update to Rocky 9. We will update to 9
moving forward, but this time around it would be good to have a Rocky 8 version.
Thanks!
c
on the slave zone.
The bucket has a lifecycle policy set to delete objects after 3 days. But
objects on the slave zone never get deleted.
How can I get this to work?
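For reference, the lifecycle rule is just a 3-day expiration applied through the S3 API, roughly like this (a sketch; the bucket name and endpoint are placeholders):

cat > lc.json <<'EOF'
{
  "Rules": [
    { "ID": "expire-3-days", "Status": "Enabled", "Filter": { "Prefix": "" }, "Expiration": { "Days": 3 } }
  ]
}
EOF
aws --endpoint-url https://rgw.example.com s3api put-bucket-lifecycle-configuration \
  --bucket <bucket> --lifecycle-configuration file://lc.json

On each zone, radosgw-admin lc list shows the per-bucket lifecycle status on that zone, and radosgw-admin lc process kicks off a run manually.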
Thanks
Chris
On Friday, July 12, 2024 at 04:30:38 PM MDT, Christopher Durham
wrote:
Hi,
I have a multisite system with
p would be appreciated. Thanks
-Chris
On Monday, September 2, 2024 at 12:56:23 PM MDT, Soumya Koduri
wrote:
On 9/2/24 8:41 PM, Christopher Durham wrote:
> Asking again, does anyone know how to get this working?
> I have multisite sync set up between two sites. Due to bandwidth con
ation happens fine for buckets that are replicated.
I am hoping my issue is related to 18.2.2 vs 18.2.4, which I will update to
soon on my loaded cluster. Any further thoughts are appreciated.
-Chris
On Tuesday, September 3, 2024 at 04:31:46 PM MDT, Christopher Durham
wrote:
Soumya,
Than
Hi,
I am using 15.2.7 on CentOS 8.1. I have a number of old buckets that are listed
with
# radosgw-admin metadata list bucket.instance
but are not listed with:
# radosgw-admin bucket list
Lets say that one of them is:
'old-bucket' and its instance is 'c100feda-5e16-48a4-b908-7be61aa877ef.123.1'
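For reference, the metadata entries can be inspected (and, presumably, removed) with something like the following; this is a sketch, and I have not verified that removing the instance entry is the whole cleanup:

radosgw-admin metadata get bucket:old-bucket
radosgw-admin metadata get bucket.instance:old-bucket:c100feda-5e16-48a4-b908-7be61aa877ef.123.1
# only once satisfied the instance is truly orphaned:
radosgw-admin metadata rm bucket.instance:old-bucket:c100feda-5e16-48a4-b908-7be61aa877ef.123.1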
Hi,
There is a potential that my Ceph RGW multisite solution may be down for an
extended time (2 weeks?) for a physical relocation. Some questions,
particularly in regard to RGW
1. Is there any limit on downtime after which I might have to restart an entire
sync? I want to still be able to wri
Hi,
The following article:
https://ceph.io/en/news/blog/2024/ceph-a-journey-to-1tibps/
suggests disabling C-states on your CPUs (on the OSD nodes) as one method of
improving performance. The article seems to indicate that the scenario being
addressed was one with NVMe devices as OSDs.
Ques
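(For anyone following along: the way I would expect C-states to be limited on an EL-family OSD node is via kernel parameters or a tuned profile, e.g. the following. This is my assumption; the article does not spell it out.)

grubby --update-kernel=ALL --args="processor.max_cstate=1 intel_idle.max_cstate=0"
# or let tuned pin cpu_dma_latency instead:
tuned-adm profile latency-performance

The kernel-argument route needs a reboot to take effect.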
Hi,
I am upgrading my test cluster from 17.2.6 (quincy) to 18.2.2 (reef).
As it was an rpm install, I am following the directions here:
Reef — Ceph Documentation
The upgrade worked, but I have some observations and questions before I move to
my
Hi,
I have a reef cluster 18.2.2 on Rocky 8.9. This cluster has been upgraded from
pacific->quincy->reef over the past few years. It is a multisite setup with one
other cluster that works fine with s3/radosgw on both sides, with proper
bidirectional data replication.
On one of the master cluster's
Hello,
I am using 18.2.2 on Rocky 8 Linux.
I am getting an http error 500 on the ceph dashboard on reef 18.2.2
when trying to look at any of the radosgw pages.
I tracked this down to /usr/share/ceph/mgr/dashboard/controllers/rgw.py
It appears to parse the metadata for a given radosgw
the
metadata is updated.
Best wishes,
Pierre Riteau
On Wed, 8 May 2024 at 19:51, Christopher Durham wrote:
> Hello,
> I am using 18.2.2 on Rocky 8 Linux.
>
> I am getting an http error 500 on the ceph dashboard on reef
> 18.2.2 when trying to look at any of the rados
I have both a small test cluster and a larger production cluster. They are
(were, for the test cluster) running Rocky Linux 8.9. They were both originally
updated from Pacific and are currently at reef 18.2.2. These are all rpm installs.
It has come time to consider upgrades to Rocky 9.3. As there is no
We have a reef 18.2.2 cluster with 6 radosgw servers on Rocky 8.9. The radosgw
servers are not fronted by anything like HAProxy as the clients connect
directly to a DNS name via round-robin DNS. Each of the radosgw servers has
a certificate using SAN entries for all 6 radosgw servers as well
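(For reference, each radosgw's frontend is configured along these lines; the section name and certificate path below are placeholders, not the real ones:)

[client.rgw.rgw1]
rgw_frontends = beast ssl_port=443 ssl_certificate=/etc/pki/rgw/rgw-san-bundle.pem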
Hello,
I am using pacific 16.2.10 on Rocky 8.6 Linux.
After setting upmap_max_deviation to 1 on the ceph balancer in ceph-mgr, I
achieved a near perfect balance of PGs and space on my OSDs. This is great.
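(That setting amounts to the following, plus a quick status check:)

ceph config set mgr mgr/balancer/upmap_max_deviation 1
ceph balancer status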
However, I started getting the following errors on my ceph-mon logs, every
three minutes,
root
step choose indep 4 type rack
step chooseleaf indep 2 type host
step emit
}
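For completeness, a full rule of that shape looks roughly like this in a decompiled crushmap (the rule name, id, and root here are placeholders, and the two "tries" steps are simply what ceph usually emits for EC rules):

rule ecpool_rule {
        id 5
        type erasure
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step choose indep 4 type rack
        step chooseleaf indep 2 type host
        step emit
}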
-Chris
-Original Message-
From: Dan van der Ster
To: Christopher Durham
Cc: Ceph Users
Sent: Mon, Oct 10, 2022 12:22 pm
Subject: [ceph-users] Re: crush hierarchy backwards and upmaps ...
Hi,
int.
Any other thoughts would be appreciated.
-Chris
-Original Message-
From: Dan van der Ster
To: Christopher Durham
Cc: Ceph Users
Sent: Tue, Oct 11, 2022 11:39 am
Subject: [ceph-users] Re: crush hierarchy backwards and upmaps ...
Hi Chris,
Just curious, does this rule make sense
know what happens, although it could take a bit of time both in
scheduling it and then in execution.
Thank you for your help, it is appreciated.
-Chris
-Original Message-
From: Dan van der Ster
To: Christopher Durham
Cc: ceph-users@ceph.io
Sent: Fri, Oct 14, 2022 2:25 am
Subject: [ceph
Hi,
I've seen Dan's talk:
https://www.youtube.com/watch?v=0i7ew3XXb7Q
and other similar ones that talk about CLUSTER size.
But, I see nothing (perhaps I have not looked hard enough), on any
recommendations regarding max POOL size.
So, are there any limitations on a given pool that has all OSDs of
Hello,
In what releases of ceph is s3 select in rgw available? From what I can glean,
we have:
pacific: csv
quincy: csv, parquet
Am I correct?
Thanks!
-Chris
Hello,
I have an ec pool set as 6+2. I have noticed that when rebooting servers during
system upgrades, I get pgs set to inactive while the osds are down. I then
discovered that my min_size for the pool is set to 7, which explains it: if
I reboot two servers that host a pg with OSDs on bot
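For reference, the knob involved can be checked and, temporarily, lowered like this (the pool name is a placeholder):

ceph osd pool get <poolname> min_size
ceph osd pool set <poolname> min_size 6

With a 6+2 pool, dropping min_size to k=6 keeps pgs active with two hosts down, but it removes the last-shard safety margin, so it is usually only done for the duration of the maintenance and then set back to 7.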
Hi,
There are various articles, case studies, etc. about large ceph clusters,
storing 10s of PiB, with CERN being the largest cluster as far as I know.
Is there a largest pool capacity limit? In other words, while you may have a
30 PiB cluster, is there a limit or recommendation as to max pool capa
Hi,
Is there any information on this issue? Max number of OSDs per pool, or max
pool size (data) as opposed to cluster size? Thanks!
-Chris
-Original Message-
From: Christopher Durham
To: ceph-users@ceph.io
Sent: Thu, Dec 15, 2022 5:36 pm
Subject: max pool size (amount of data
Hi,
For a given crush rule and pool that uses it, how can I verify that the pgs in
that pool follow the rule? I have a requirement to 'prove' that the pgs are
mapping correctly.
I see: https://pypi.org/project/crush/
This allows me to read in a crushmap file that I could then use to verify a pg
cluster. Just one example:
crushtool -i crushmap.bin --test --rule 5 --show-mappings --num-rep 6 | head
CRUSH rule 5 x 0 [19,7,13,22,16,28]
CRUSH rule 5 x 1 [21,3,15,31,19,7]
[...]
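To compare that against the live cluster, something like the following works (dump the in-use crushmap for crushtool, then list the actual up/acting sets for the pool; the pool name is a placeholder):

ceph osd getcrushmap -o crushmap.bin
ceph pg ls-by-pool <poolname>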
Regards,
Eugen
Zitat von Christopher Durham :
> Hi,
> For a given crush rule and pool that uses it, how can I ver
-Original Message-
From: Stephen Smith6
To: Christopher Durham ; ceph-users@ceph.io
Sent: Wed, Jan 11, 2023 2:00 pm
Subject: Re: [ceph-users] pg mapping verification
Question:
What does the future hold with regard to cephadm vs rpm/deb packages? If it is
now suggested to use cephadm, and thus containers, to deploy new clusters, what
does the future hold? Is there an intent, at some time in the future, to no
longer support rpm/deb packages for Linux systems, a
Hi,
I see that this PR: https://github.com/ceph/ceph/pull/48030
made it into ceph 17.2.6, as per the change log at:
https://docs.ceph.com/en/latest/releases/quincy/ That's great.
But my scenario is as follows:
I have two clusters set up as multisite. Because of the lack of replication
for IAM
d-or-rm-mons/
-Chris
-Original Message-
From: Casey Bodley
To: Christopher Durham
Cc: ceph-users@ceph.io
Sent: Tue, Apr 11, 2023 1:59 pm
Subject: [ceph-users] Re: ceph 17.2.6 and iam roles (pr#48030)
On Tue, Apr 11, 2023 at 3:53 PM Casey Bodley wrote:
>
> On Tue, Apr 11, 2023 at
Hi,
After doing a 'radosgw-admin metadata sync init' on a secondary site in a
2-cluster multisite configuration (see my previous post on doing a quincy
upgrade with sts roles manually synced between primary and secondary), and
letting it sync all metadata from a master zone, I get the following
Hi,
I am using 17.2.6 on rocky linux for both the master and the slave site
I noticed that:
radosgw-admin sync status
often shows that the metadata sync is behind a minute or two on the slave. This
didn't make sense, as the metadata isn't changing as far as I know.
radosgw-admin mdlog list
(on
Hi,
I am using ceph 17.2.6 on rocky linux 8.
I got a large omap object warning today.
OK, so I tracked it down to a shard for a bucket in the index pool of an s3
pool.
However, when listing the omapkeys with:
# rados -p pool.index listomapkeys .dir.zone.bucketid.xx.indexshardnumber
it is clear th
's/^.1001_/echo -n -e \x801001_/'; echo ' > mykey &&
rados rmomapkey -p default.rgw.buckets.index
.dir.zone.bucketid.xx.indexshardnumber --omap-key-file mykey';
done < uglykeys.txt
On Tue, Jul 18, 2023 at 9:27 AM Christopher Durham wrote:
Hi,
I am using ceph 17.2.6 on r
77/metadata.gz
# cat do_remove.sh
# usage: "bash do_remove.sh | sh -x"
while read f;
do
echo -n $f | sed 's/^.1001_/echo -n -e \x801001_/'; echo ' > mykey &&
rados rmomapkey -p default.rgw.buckets.index
.dir.zone.bucketid.xx.indexshardnumber --omap-key-file mykey';
done < uglykeys.txt
ot;
while read f;
do
echo -n $f | sed 's/^.1001_/echo -n -e \x801001_/'; echo ' > mykey &&
rados rmomapkey -p default.rgw.buckets.index
.dir.zone.bucketid.xx.indexshardnumber --omap-key-file mykey';
done < uglykeys.txt
On Tue, Jul 18, 2023 at 9:27 AM Chri
Hi,
I am using 17.2.6 on Rocky Linux 8
The ceph mgr dashboard, in my situation, (bare metal install, upgraded from
15->16-> 17.2.6), can no longer hit the ObjectStore->(Daemons,Users,Buckets)
pages.
When I try to hit those pages, it gives an error:
RGW REST API failed request with status code 40
ildcard certificates only,
until then we leave ssl-verify disabled. But I didn't check the tracker
for any pending tickets, so someone might be working on it.
Regards,
Eugen
[1] https://github.com/ceph/ceph/pull/47207/files
Zitat von Christopher Durham :
> Hi,
> I am using 17.2.6 o
I am using ceph 17.2.6 on Rocky 8.
I have a system that started giving me large omap object warnings.
I tracked this down to a specific index shard for a single s3 bucket.
rados -p listomapkeys .dir..bucketid.nn.shardid
shows over 3 million keys for that shard. There are only about 2
million obj
ar to be replication log entries
for multisite. can you confirm that this is a multisite setup? is the
'bucket sync status' mostly caught up on each zone? in a healthy
multisite configuration, these log entries would eventually get
trimmed automatically
On Wed, Sep 20, 2023 at 7:08 PM Christo
e if I can narrow
things down as soon as the issue starts, and maybe garner more data to track
this down further. We'll see.
-chris
On Thursday, September 21, 2023 at 11:17:51 AM MDT, Casey Bodley
wrote:
On Thu, Sep 21, 2023 at 12:21 PM Christopher Durham wrote:
>
>
> Hi Cas
'/'
We do plan to go to reef, but we are not quite there yet.
-Chris
On Thursday, September 21, 2023 at 11:17:51 AM MDT, Casey Bodley
wrote:
On Thu, Sep 21, 2023 at 12:21 PM Christopher Durham wrote:
>
>
> Hi Casey,
>
> This is indeed a multisite setup
I have a 2-site multisite configuration on ceph 18.2.4 on EL9.
After system updates, we discovered that a particular bucket had several
thousand objects missing, which the other side had. Newly created objects were
being replicated just fine.
I decided to 'restart' syncing that bucket. Here is
Casey,
OR, is there a way to continue on with new data syncing (incremental) as the
full sync catches up, as the full sync will take a long time, and no new
incremental data is being replicated.
-Chris
On Wednesday, November 20, 2024 at 03:30:40 PM MST, Christopher Durham
wrote
Casey,
Thanks for your response. So is there a way to abandon a full sync and just
move on with an incremental from the time you abandon the full sync?
-Chris
On Wednesday, November 20, 2024 at 12:29:26 PM MST, Casey Bodley
wrote:
On Wed, Nov 20, 2024 at 2:10 PM Christopher Durham
ync.
Would radosgw-admin bucket sync disable --bucket followed by
radosgw-admin bucket sync enable --bucket do this? Or would that do
another full sync and not an incremental? Thanks
-Chris
On Thursday, November 14, 2024 at 04:18:34 PM MST, Christopher Durham
wrote:
Hi,
I have heard
t to think it
is up to date and only replicate new objects from the time it was marked up to
date?
Thanks for any information
-Chris
On Friday, November 8, 2024 at 03:45:05 PM MST, Christopher Durham
wrote:
I have a 2-site multisite configuration on ceph 18.2.4 on EL9.
Afte
Hello,
I have reef 18.2.4 on Rocky9.
I have a multi-site environment with one zone on each side of the multi-site.
Replication appears to be working in both directions.
My question is, is lifecycle processing independent on each side?
I have a lifecycle json installed to delete all objects in a g
-link everything with a lifecycle policy
On Fri, Dec 6, 2024 at 5:04 PM Christopher Durham wrote:
>
>
> I have 18.2.4 on Rocky 9 Linux. This system has been updated from octopus ->
> pacific -> quincy (18.2.2) -> (el8->el9 reinstall of each server, but ceph
> osd a
Bodley
wrote:
hey Chris,
On Wed, Nov 20, 2024 at 6:02 PM Christopher Durham wrote:
>
> Casey,
>
> OR, is there a way to continue on with new data syncing (incremental) as the
> full sync catches up, as the full sync will take a long time, and no new
> incremental data i
I have 18.2.4 on Rocky 9 Linux. This system has been updated from octopus ->
pacific -> quincy (18.2.2) -> (el8->el9 reinstall of each server, but ceph osd
and mon survival) -> reef (18.2.4) over several years.
It appears that I have two probably related problems with lifecycle expiration
in a
Hi,
The non-cephadm update procedure to update reef to squid (for rpm-based
clusters) here:
https://docs.ceph.com/en/latest/releases/squid/#upgrading-non-cephadm-clusters
suggests that your monitors/mds/radosgw/osds are all on separate servers. While
perhaps they should be, that is not possible at
Hi,
When will rpms for Linux EL10 and clones such as Rocky Linux be available, and
what ceph release will it be? I am not specifically asking when RedHat or
IBM themselves will release on this version; I am looking for el10 rpms at
https://download.ceph.com. Thanks
-Chris
In my 19.2.2/squid cluster, (Rocky 9 Linux) I am trying to determine if I am
having
issues with OSD latency. The following URL:
https://sysdig.com/blog/monitoring-ceph-prometheus/
states the following about prometheus metrics:
* ceph_osd_op_r_latency_count: Returns the number of reading operat
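The usual way to turn those counters into a latency figure is to divide their rates, giving per-OSD average read latency over a window (the 5m window below is arbitrary):

rate(ceph_osd_op_r_latency_sum[5m]) / rate(ceph_osd_op_r_latency_count[5m])

The same pattern with ceph_osd_op_w_latency_sum and ceph_osd_op_w_latency_count gives write latency.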