Hey Cephers,
I was investigating another issue when I stumbled across this, and I am
not sure whether this is "as intended" or faulty. This is a cephadm cluster
on Reef 18.2.4, containerized with Docker.
The ceph-crash daemon states that it can't find its key and that it can't
access RADOS.
Pre-
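What I would check next, assuming the usual cephadm layout (the key name
and keyring path below are my assumption, not something I verified):
ceph auth ls | grep crash
ceph auth get client.crash.$(hostname)
# if the key is missing, it can be recreated with the crash profiles
ceph auth get-or-create client.crash.$(hostname) mon 'profile crash' mgr 'profile crash'
# the daemon's keyring on the host should be somewhere like /var/lib/ceph/<fsid>/crash.<hostname>/keyring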
Hey there,
I have that problem too, although I got it when updating from 17.2.7 to 18.2.4.
After reading this mail I fiddled around a bit, and Prometheus
does not have ceph_osd_recovery_ops.
Then I looked into the files in
/var/lib/ceph/xyz/prometheus.node-name/etc/prometheus/prometheus.yml
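To see whether the metric is missing only in Prometheus or never exported
at all, something like this should tell (hostnames and the default ports
9095/9283 of a stock cephadm monitoring stack are assumptions):
curl -s http://<prometheus-host>:9095/api/v1/label/__name__/values | tr ',' '\n' | grep ceph_osd_recovery_ops
curl -s http://<mgr-host>:9283/metrics | grep ceph_osd_recovery_ops
If the second command also returns nothing, the metric is not being
exported in the first place, so prometheus.yml would not be the culprit.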
Dear Ceph-users,
I have a problem that I'd like to have your input for.
Preface:
I have a test cluster and a production cluster. Both are set up the same
way and both show the same "issue". I am running Ubuntu 22.04 and
deployed Ceph 17.2.3 via cephadm. Upgraded to 17.2.7 later on, which i
Hi Sake,
Do you have the config option mgr/cephadm/secure_monitoring_stack set to true? If
so, this pull request will fix your problem:
https://github.com/ceph/ceph/pull/58402
Regards,
Frank
Sake Ceph wrote:
After the upgrade from 17.2.7 to 18.2.4 a lot of graphs are empty. For example
the OSD laten
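A quick check on your side would be (only the option name above is taken
from your setup, the rest is the generic config CLI):
ceph config get mgr mgr/cephadm/secure_monitoring_stack
Until the fix is released, switching the option off and redeploying the
monitoring stack may be a workaround, at the cost of running the
monitoring endpoints without authentication:
ceph config set mgr mgr/cephadm/secure_monitoring_stack false
ceph orch redeploy prometheus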
Hi,
When upgrading a cephadm-deployed Quincy cluster to Reef, no ceph-exporter
service gets launched.
This is new in Reef (from the release notes: ceph-exporter: Now the
performance metrics for Ceph daemons are exported by ceph-exporter,
which deploys on each daemon rather than using prom
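If it is indeed missing, checking and adding it by hand should be
straightforward (a sketch; I am assuming ceph-exporter is a regular
service type the orchestrator can apply in Reef):
ceph orch ls ceph-exporter
ceph orch apply ceph-exporter
ceph orch ps | grep ceph-exporter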
Dear Cephers,
after a succession of unfortunate events, we suffered a complete
datacenter blackout today.
Ceph came back up _nearly_ perfectly. Health was OK and all services
were online, but we were seeing weird problems. Weird as in, we could
sometimes map RBDs and sometimes not,
com/issues/64308.
We have worked around it by stripping the query parameters from OPTIONS
requests to the RGWs.
Nginx proxy config:
if ($request_method = OPTIONS) {
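    # appending '?' to the replacement makes nginx drop the original query string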
    rewrite ^\/(.+)$ /$1? break;
}
Regards,
Reid
On Wed, Jun 5, 2024 at 12:10 PM mailing-lists
wrote:
OK, sorry for spam, apparently this hasn't been working for a month...
Forget this mail. Sorry!
On 05.06.24 17:41, mailing-lists wrote:
Dear Cephers,
I am facing a problem. I have updated our Ceph cluster from 17.2.3 to
17.2.7 last week and I've just gotten complaints about a we
Dear Cephers,
I am facing a problem. I have updated our Ceph cluster from 17.2.3 to
17.2.7 last week and I've just gotten complaints about a website that is
no longer able to use S3 via CORS. (GET works, PUT does not.)
I am using cephadm and I have deployed 3 RGWs + 2 ingress services.
The
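For reproducing this outside the browser, the preflight can be sent with
curl (hostnames, bucket and query string below are placeholders):
curl -i -X OPTIONS \
  -H 'Origin: https://app.example.com' \
  -H 'Access-Control-Request-Method: PUT' \
  'https://rgw.example.com/mybucket/myobject?uploadId=test'
Comparing the responses with and without the query string should show
whether this is the same OPTIONS issue and workaround mentioned above.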
Hey all,
I'm facing a "minor" problem.
I do not always get results when going to the dashboard, under
Block -> Images, in the Images or Namespaces tab. The little refresh button
keeps spinning, and sometimes after several minutes it will finally
show something. That is odd, because from the sh
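For comparison, timing the same listings from the CLI might help narrow it
down (pool and namespace names are placeholders):
time rbd ls -l <pool>
time rbd namespace ls <pool>
If the CLI answers quickly while the dashboard hangs, the problem is more
likely in the mgr/dashboard module than in RBD itself.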
Dear Michael,
I don't have an explanation for your problem unfortunately, but I was
surprised that you see a drop in performance that this SSD shouldn't
have. Your SSDs (Samsung 870 EVO) should not get slower
on large writes. You can verify this in the post you've attached [1] o
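If you want to measure the drive itself, a long sequential write with fio
against a scratch file on an otherwise idle disk would show whether the
throughput stays flat over time (path and sizes are placeholders; do not
point this at a disk that is in use):
fio --name=sustained-write --filename=/mnt/scratch/fio.tmp --size=100G \
    --rw=write --bs=1M --direct=1 --ioengine=libaio --iodepth=16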
7cc-90e2-c5cc96bdd825/osd-block-2a1d1bf0-300e-4160-ac55-047837a5af0b
and block.wal on
/dev/ceph-3a336b8e-ed39-4532-a199-ac6a3730840b/osd-wal-5d845dba-8b55-4984-890b-547fbdaff10c from
there, check whether that device is indeed an LV on the NVMe device.
Can you share the full output of lsblk?
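Alongside lsblk, the plain LVM view usually makes the relationship clear:
lsblk -o NAME,TYPE,SIZE,MOUNTPOINT
lvs -o lv_name,vg_name,lv_size,devices
pvs -o pv_name,vg_name,pv_size,pv_free
The devices column of lvs shows which physical device each block/WAL LV
actually sits on.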
Than
OK, attachments won't work.
See this:
https://filebin.net/t0p7f1agx5h6bdje
Best
Ken
On 01.02.23 17:22, mailing-lists wrote:
I've pulled a few lines from the log and attached them to this
mail. (I hope this works for this mailing list?)
I found the line 135
[2023-01-26 16
recreation steps.
Thanks,
On Wed, 1 Feb 2023 at 10:13, mailing-lists
wrote:
Ah, nice.
service_type: osd
service_id: dashboard-admin-1661788934732
service_name: osd.dashboard-admin-1661788934732
placement:
  host_pattern: '*'
spec:
  data_devices:
  bluestore
  wal_devices:
    model: Dell Ent NVMe AGN MU AIC 6.4TB
status:
  created: '2022-08-29T16:02:22.822027Z'
  last_refresh: '2023-02-01T09:03:22.853860Z'
  running: 306
  size: 306
Best
Ken
On 31.01.23 23:51, Guillaume Abrioux wrote:
On Tue, 31 Jan 2023 at 22:31, mai
? Did your db/wal device
show as having free space prior to the OSD creation?
On Tue, Jan 31, 2023, at 04:01, mailing-lists wrote:
OK, the OSD is filled again. In and Up, but it is not using the NVMe
WAL/DB anymore.
And it looks like the LVM volume group of the old OSD is still on the NVMe
drive. I co
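What I would check for the leftover LV (plain LVM commands; verify
anything before removing it, the names below are placeholders):
vgs
lvs -o lv_name,vg_name,lv_size,devices | grep -i nvme
# only after making sure it is the stale db/wal LV of the removed OSD:
lvremove <vg-name>/<lv-name>
If the old OSD's db/wal LV is still there, the VG has no free space left,
which would explain why the new OSD was created without the NVMe DB/WAL.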
dashboard).
Do you have a hint on how to fix this?
Best
Ken
On 30.01.23 16:50, mailing-lists wrote:
oh wait,
I might have been too impatient:
1/30/23 4:43:07 PM[INF]Deploying daemon osd.232 on ceph-a1-06
1/30/23 4:42:26 PM[INF]Found osd claims for drivegroup
dashboard-admin
dashboard-admin-1661788934732 -> {'ceph-a1-06': ['232']}
1/30/23 4:39:34 PM[INF]Found osd claims -> {'ceph-a1-06': ['232']}
1/30/23 4:39:34 PM[INF]Found osd claims -> {'ceph-a1-06': ['232']}
Although it doesn't show the NVM
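To double-check whether the recreated OSD picked up the NVMe at all, the
OSD metadata should tell (field names vary a bit between releases, so the
grep is only a rough filter):
ceph osd metadata 232 | grep -iE 'bluefs|devices'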
nderstand ramifications
before running any commands. :)
David
On Mon, Jan 30, 2023, at 04:24, mailing-lists wrote:
# ceph orch osd rm status
No OSD remove/replace operations reported
# ceph orch osd rm 232 --replace
Unable to find OSDs: ['232']
It is not finding 232 anymore. It is
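To see whether 232 still exists in the CRUSH map or as a daemon at all
(presumably the orchestrator can only replace an OSD it still knows about):
ceph osd tree | grep -w 232
ceph orch ps | grep osd.232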
filling to the other OSDs for the PGs that were
on the failed disk?
David
On Fri, Jan 27, 2023, at 03:25, mailing-lists wrote:
Dear Ceph-Users,
I am struggling to replace a disk. My Ceph cluster is not replacing the
old OSD even though I did:
ceph orch osd rm 232 --replace
The OSD 232 is st
Dear Ceph-Users,
I am struggling to replace a disk. My Ceph cluster is not replacing the
old OSD even though I did:
ceph orch osd rm 232 --replace
The OSD 232 is still shown in the OSD list, but the new HDD gets
placed as a new OSD. That wouldn't bother me much if the OSD were also
placed o
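What I expected, as far as I understand the replace workflow (id taken
from above):
ceph orch osd rm 232 --replace
ceph orch osd rm status
ceph osd tree | grep -w 232
i.e. the drain shows up in "rm status", and afterwards OSD 232 stays in
the tree marked as destroyed until a new disk takes over its ID.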
VERY least the number of
OSDs it lives on, rounded up to the next power of 2. I'd probably go for at
least (2 x #OSD), rounded up. If you have too few, your metadata operations will
contend with each other.
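As a worked example (OSD count purely illustrative): with 300 OSDs that
pool would get at least 2 x 300 = 600 PGs, and rounding up to the next
power of two gives pg_num 1024.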
On Nov 3, 2022, at 10:24, mailing-lists wrote:
Dear Ceph'ers,
I am wondering o
Dear Ceph'ers,
I am wondering how to choose the number of PGs for an RBD EC pool.
To be able to use RBD images on an EC pool, you need a regular
replicated RBD pool as well as an EC pool with EC overwrites enabled,
but how many PGs would you need for the replicated RBD pool? It does
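For context, the setup I have in mind looks roughly like this (pool
names, PG numbers and the EC profile are placeholders, not a
recommendation):
ceph osd pool create rbd-meta 1024 1024 replicated
ceph osd pool create rbd-data 2048 2048 erasure my-ec-profile
ceph osd pool set rbd-data allow_ec_overwrites true
ceph osd pool application enable rbd-meta rbd
ceph osd pool application enable rbd-data rbd
rbd create rbd-meta/myimage --size 1T --data-pool rbd-data
The open question is what pg_num to give the replicated pool (rbd-meta
here), since it only holds the image metadata/omap and not the bulk data.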
Dear Ceph-Users,
I've recently set up a 4.3 PB Ceph cluster with cephadm.
I am seeing that the health is ok, as seen here:
ceph -s
  cluster:
    id:     8038f0xxx
    health: HEALTH_OK
  services:
    mon: 5 daemons, quorum
         ceph-a2-07,ceph-a1-01,ceph-a1-10,ceph-a2-01,ceph-a1-05 (age 3w)
    mg
I want to update my Mimic cluster to the latest minor version using the
rolling-update script of ceph-ansible. The cluster was rolled out with that
setup.
So as long as ceph_stable_release stays on the currently installed version
(mimic), the rolling update script will only do a minor update.
I
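The invocation would essentially be the stock one (inventory path is
mine, the playbook path is as shipped by ceph-ansible):
ansible-playbook -i <inventory> infrastructure-playbooks/rolling_update.yml
with ceph_stable_release left at mimic in group_vars, so that only the
minor version changes.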
Hi Joe and Mehmet!
Thanks for your responses!
The requested outputs are at the end of the message.
But to make my question clearer:
What we are actually after is not the CURRENT usage of our OSDs, but
stats on total GBs written in the cluster, per OSD, and the read/write ratio.
With those num
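The places I know of for such lifetime numbers (device id and /dev path
are placeholders; SMART attribute names differ per vendor):
ceph device ls
ceph device get-health-metrics <devid>
smartctl -a /dev/sdX
ceph device ls maps drives to OSDs/hosts, get-health-metrics dumps the
SMART data Ceph has collected, and smartctl on the host shows raw
counters such as total LBAs written/read where the drive exposes them.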
Hi,
We would like to replace the current Seagate ST4000NM0034 HDDs in our
Ceph cluster with SSDs, and before doing that we would like to check out
the typical usage of our current drives over the last few years, so we can
select the best (price/performance/endurance) SSD to replace them with.
I