Hello
I found myself in the following situation:
[WRN] PG_AVAILABILITY: Reduced data availability: 3 pgs inactive
pg 4.3d is stuck inactive for 8d, current state
activating+undersized+degraded+remapped, last acting
[4,NONE,46,NONE,10,13,NONE,74]
pg 4.6e is stuck inactive for 9d, current
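For PGs stuck in activating+undersized+degraded like these, a first diagnostic
pass usually looks something like the lines below (pg 4.3d is taken from the
warning above; the pool name is a placeholder). The NONE entries in the acting
set generally mean CRUSH could not find enough OSDs for those shards, so the
pool's crush rule and failure domain are worth checking:
  ceph pg 4.3d query | less     # recovery_state shows what peering is waiting on
  ceph pg map 4.3d              # compare the up set against the acting set above
  ceph osd pool get <pool> all  # size, min_size and crush_rule of the affected pool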
Hello. We have a requirement to change the hostname on some of our OSD
nodes. All of our nodes are Ubuntu 22.04 based and have been deployed
using the 17.2.7 orchestrator.
1. Is there a procedure to rename an existing node without rebuilding it,
and have it detected by the Ceph Orchestrator?
If not,
2
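As far as I know there is no in-place rename; the orchestrator keys hosts by
hostname, so the direction usually discussed is remove, rename, re-add, leaving
the OSD data on disk. A rough, untested sketch (hostnames and IP are
placeholders; check the cephadm docs before relying on it):
  ceph osd set noout                      # avoid rebalancing while the host's OSDs are down
  ceph orch host rm old-host --force      # drops the host from orchestrator management only
  hostnamectl set-hostname new-host       # on the node itself; update DNS / /etc/hosts too
  ceph orch host add new-host 192.0.2.11  # re-add under the new name
  ceph cephadm osd activate new-host      # re-deploy OSD daemons from the existing LVM volumes
  ceph osd unset noout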
Hello,
I have a Quincy (17.2.6) cluster, looking to create a multi-zone /
multi-region RGW service and have a few questions with respect to published
docs - https://docs.ceph.com/en/quincy/radosgw/multisite/.
In general, I understand the process as:
1. Create a new REALM, ZONEGROUP, ZONE
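For step 1, the commands in the linked multisite doc look roughly like this
(realm, zonegroup and zone names and the endpoint are placeholders):
  radosgw-admin realm create --rgw-realm=myrealm --default
  radosgw-admin zonegroup create --rgw-zonegroup=us --rgw-realm=myrealm \
      --endpoints=http://rgw1.example.com:8080 --master --default
  radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east \
      --endpoints=http://rgw1.example.com:8080 --master --default
  radosgw-admin period update --commit    # commit the new period so the config is applied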
Hello,
I have a few hosts, about to be added into a cluster, that have a multipath
storage config for SAS devices. Is this supported on Quincy, and how
would ceph-orchestrator and/or ceph-volume handle multipath storage?
Here's a snip of lsblk output of a host in question:
# lsblk
NAME
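Independent of the lsblk view, one way to check what the orchestrator and
ceph-volume actually make of the multipath devices before creating OSDs:
  ceph orch device ls --refresh      # devices the orchestrator considers available per host
  cephadm ceph-volume inventory      # run on the host itself; shows how ceph-volume sees the mpath devices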
h pool contains. A pool with 5% of the data needs fewer PGs than a
> pool with 50% of the cluster’s data.
>
> Others may well have different perspectives, this is something where
> opinions vary. The pg_autoscaler in bulk mode can automate this, if one is
> prescient with feeding i
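For reference, the bulk flag mentioned above is a per-pool setting, and the
autoscaler can also be given an expected share of the cluster up front (pool
name and ratio are placeholders):
  ceph osd pool set mypool bulk true              # start the pool with a full PG budget
  ceph osd pool set mypool target_size_ratio 0.5  # hint: expected fraction of cluster capacity
  ceph osd pool autoscale-status                  # review what the autoscaler intends to do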
Hello
Looking to get some official guidance on PG and PGP sizing.
Is the goal to maintain approximately 100 PGs per OSD per pool, or for the
cluster in general?
Assume the following scenario:
Cluster with 80 OSDs across 8 nodes;
3 Pools:
- Pool1 = Replicated 3x
- Pool2 = Repli
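The scenario is cut off in the archive, but as a rough worked example of the
common ~100 PG replicas per OSD target (which counts all pools together, not
each pool separately), assuming for illustration that all three pools are 3x
replicated and hold similar amounts of data:
  total PGs  ~ 80 OSDs x 100 / 3 replicas  ~ 2667
  per pool   ~ 2667 / 3 pools              ~ 889  -> pg_num = pgp_num = 1024
                                              (or 512 to stay below ~100 PGs per OSD)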
environment first. If you already wiped the temporary OSDs I don't see
a chance to recover from this.
Regards,
Eugen
[2] https://docs.ceph.com/en/pacific/man/8/ceph-objectstore-tool/
Zitat von Deep Dish :
> Thanks for the insight Eugen.
>
> Here's what basically happened:
>
>
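For context, the export/import cycle that [2] (ceph-objectstore-tool) is
usually brought up for looks roughly like this; OSD ids, the pg id and the file
path are placeholders, and the OSD has to be stopped first (under cephadm, run
the tool inside "cephadm shell --name osd.<id>"):
  ceph orch daemon stop osd.<id>       # stop the source OSD before touching its store
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<id> \
      --op export --pgid <pgid> --file /tmp/<pgid>.export
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<other-id> \
      --op import --file /tmp/<pgid>.export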
bly not work.
Regards,
Eugen
[1] https://www.mail-archive.com/ceph-users@ceph.io/msg14757.html
Zitat von Deep Dish :
> Hello. I really screwed up my ceph cluster. Hoping to get data off it
> so I can rebuild it.
>
> In summary, too many changes too quickly caused the cluster to
Hello. I really screwed up my ceph cluster. Hoping to get data off it
so I can rebuild it.
In summary, too many changes too quickly caused the cluster to develop
incomplete PGs. Some PGs were reporting OSDs that were to be probed.
I've created those OSD IDs (empty), however this wouldn't clea
lt
No default realm is set
# radosgw-admin realm list-periods
failed to read realm: (2) No such file or directory
# rados ls -p .rgw.root
zonegroup_info.45518452-8aa6-41b4-99f0-059b255c31cd
zone_info.743ea532-f5bc-4cca-891b-c27a586d5129
zone_names.default
zonegroups_names.default
O
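Since the zonegroup and zone objects are still present in .rgw.root, the stored
metadata itself can usually still be read back, e.g.:
  radosgw-admin zonegroup get --rgw-zonegroup=default   # dump the zonegroup JSON held in .rgw.root
  radosgw-admin zone get --rgw-zone=default             # same for the zone
  radosgw-admin zonegroup list                          # zonegroup names the cluster knows about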
repeer
> ceph pg repair
>
> Pavin.
>
> On 29-Dec-22 4:08 AM, Deep Dish wrote:
> > Hi Pavin,
> >
> > The following are additional developments. There's one PG that's
> > stuck and unable to recover. I've attached relevant ceph -s / health
>
Hi Pavin,
The following are additional developments. There's one PG that's
stuck and unable to recover. I've attached relevant ceph -s / health
detail and pg stat outputs below.
- There were some remaining lock files as suggested in /var/run/ceph/
pertaining to rgw. I removed the service, d
rhaps try this [1].
>
> [0]: https://docs.ceph.com/en/quincy/mgr/crash/
> [1]: https://docs.podman.io/en/latest/markdown/podman-logs.1.html
>
> On 27-Dec-22 11:59 PM, Deep Dish wrote:
> > HI Pavin,
> >
> > Thanks for the reply. I'm a bit at a loss honestly as t
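For reference, the crash module from [0] and the container logs from [1] are
typically read like this (the grep pattern and container name are placeholders):
  ceph crash ls                                 # crashes recorded by the mgr crash module
  ceph crash info <crash-id>                    # full backtrace for a single crash
  podman ps --format '{{.Names}}' | grep rgw    # find the rgw container on the host
  podman logs --tail 200 <container-name>       # recent radosgw output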
in your situation:
>
> 1. Logs
> 2. Is the RGW HTTP server running on its port?
> 3. Re-check config including authentication.
>
> ceph orch is too new and didn't pass muster in our own internal testing.
> You're braver than most for using it in production.
>
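A quick way to work through points 2 and 3 above (host and port are
placeholders):
  ceph orch ps | grep rgw            # are the rgw daemons reported up by the orchestrator?
  ss -tlnp | grep radosgw            # on the rgw host: is anything listening on the expected port?
  curl -v http://<rgw-host>:<port>/  # an anonymous request normally returns a small XML reply,
                                     # not "connection refused"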
):
# ceph dashboard get-rgw-api-access-key
P?G (? commented out)
Seems to me like my RGW config is non-existent / corrupted for some
reason. When trying to curl a RGW directly I get a "connection refused".
On Tue, Dec 27, 2022 at 9:41 AM Deep Dish wrote:
> I bu
I built a net-new Quincy cluster (17.2.5) using ceph orch as follows:
2x mgrs
4x rgw
5x mon
4x rgw
5x mds
6x osd hosts w/ 10 drives each --> will be growing to 7 osd hosts in the
coming days.
I migrated all data from my legacy nautilus cluster (via rbd-mirror, rclone
for s3 buckets, etc.). All m
Hello.
I have a few issues with my ceph cluster:
- RGWs have disappeared from management (console does not register any
RGWs) despite showing 4 services deployed and processes running;
- All object buckets not accessible / manageable;
- Console showing some of my pools are “updating” – its
Hello.
I'm migrating from Nautilus -> Quincy. Data is being replicated between
clusters.
As data is migrated (currently about 60T), the Quincy cluster repeatedly
seems to do a poor job of balancing PGs across all OSDs. Never had
this issue with Nautilus or other versions.
Running Quinc
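Besides the pg_autoscaler, it is worth checking that the balancer module is
enabled and in upmap mode on the Quincy side, e.g.:
  ceph balancer status
  ceph balancer mode upmap      # needs require-min-compat-client luminous or newer
  ceph balancer on
  ceph osd df tree              # check PG counts and %USE per OSD afterwards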