Hi,
more and more OSDs are crashing all the time, and I've lost more OSDs than
my replication allows; all my data is currently down or inactive.
Can somebody help me fix those asserts and get them up again (so I can
start my disaster recovery backup)?
$ sudo /usr/bin/ceph-osd -f --cluster ceph --id
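A rough sketch of backing up the PGs held by a crashed OSD with
ceph-objectstore-tool, in case the daemon cannot be brought back up (the ID,
PGID and target directory are placeholders, and the default data path is
assumed; the OSD has to stay stopped while the tool runs):
$ sudo systemctl stop ceph-osd@<ID>
$ sudo ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<ID> --op list-pgs
$ sudo ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<ID> \
       --pgid <PGID> --op export --file /backup/<PGID>.export
Exported PGs can later be imported into another stopped OSD with --op import.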
Hi Nicola,
might be
https://docs.ceph.com/en/quincy/rados/troubleshooting/troubleshooting-pg/#crush-gives-up-too-soon
or https://tracker.ceph.com/issues/57348.
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Nic
Hi Joseph,
thank you for your answer. But if I'm reading the 'ceph osd df' output I
posted correctly, I see there are about 195 PGs per OSD.
There are 608 OSDs in the pool, which is the only data pool. From what I
have calculated - and PG calc agrees - that PG number is fine.
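A quick way to summarize the PGS column of 'ceph osd df', in case anyone
wants to double-check the 195 figure (a sketch; it assumes PGS is the
second-to-last column, which can differ between releases):
$ ceph osd df | awk '$1 ~ /^[0-9]+$/ {n++; s+=$(NF-1); if ($(NF-1)>m) m=$(NF-1)} \
      END {printf "osds=%d avg_pgs=%.1f max_pgs=%d\n", n, s/n, m}'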
On 11/1/22 14:03, Joseph Mundacka
Dear Ceph Users,
Two weeks ago, the Ceph project conducted a user survey to understand how
people are using their Ceph clusters in the wild.
The results are summarized in this blog post!
https://ceph.io/en/news/blog/2022/ceph-use-case-survey-2022/
We received quite a few interesting use cases tha
Dear Ceph users,
I have one PG in my cluster that is constantly in the active+clean+remapped
state. From what I understand, there might be a problem with the up set:
# ceph pg map 3.5e
osdmap e23638 pg 3.5e (3.5e) -> up [38,78,55,49,40,39,64,2147483647]
acting [38,78,55,49,40,39,64,68]
The last OSD
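That placeholder value (2147483647) means CRUSH could not pick an eighth OSD
for the up set, which looks like the "CRUSH gives up too soon" case from the
link Frank posted. The check from that document looks roughly like this (the
rule id, the 1024 sample range and the try count are placeholders; num-rep 8
just matches the width of the up set above):
$ ceph osd getcrushmap -o /tmp/crush.map
$ crushtool -i /tmp/crush.map --test --show-bad-mappings \
      --rule 0 --num-rep 8 --min-x 1 --max-x 1024
If bad mappings are reported, one workaround described there is to raise the
retry count and re-inject the map:
$ crushtool -i /tmp/crush.map --set-choose-total-tries 100 -o /tmp/crush.fixed
$ ceph osd setcrushmap -i /tmp/crush.fixed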
Hi Eugen,
the PG merge finished and I still observe that no PG warning shows up. We have
  mgr    advanced    mon_max_pg_per_osd    300
and I have an OSD with 306 PGs. Still, no warning:
# ceph health detail
HEALTH_OK
Is this not checked per OSD? This wo
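For comparison, a quick way to put the configured limit next to the actual
per-OSD counts (a sketch; the 300 threshold is the value set above, and PGS
is assumed to be the second-to-last column of 'ceph osd df'):
$ ceph config get mgr mon_max_pg_per_osd
$ ceph osd df | awk '$1 ~ /^[0-9]+$/ && $(NF-1) > 300 {print "osd."$1, $(NF-1), "PGs"}'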
I had to check the logs once again to find out what exactly I did...
1. Destroyed 2 OSDs from host pirat and recreated them, but backfilling
was still in progress:
2022-10-26T13:22:13.744545+0200 mgr.skarb (mgr.40364478) 93039 : cluster
[DBG] pgmap v94205: 285 pgs: 2
active+undersized+degraded+
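For what it's worth, a quick way to confirm that backfill has fully finished
before destroying the next pair of OSDs (a sketch; pgs_brief puts the PG
state in the second column):
$ ceph -s | grep -E 'backfill|degraded|undersized'
$ ceph pg dump pgs_brief 2>/dev/null | awk '$2 ~ /backfill|degraded/ {print $1, $2}'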
Hi all,
I am planning to set up an RBD pool on my Ceph cluster for virtual
machines created in my Cloudstack environment. In parallel, a CephFS
pool should be used as secondary storage for VM snapshots, ISOs etc.
Are there any performance issues when using both RBD and CephFS, or is it
bett
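For context, the setup described amounts to separate pools on the same
cluster (CephFS needs its own metadata and data pool). A minimal sketch,
with pool names, PG counts and the filesystem name as placeholders rather
than recommendations:
$ ceph osd pool create cloudstack-rbd 128
$ ceph osd pool application enable cloudstack-rbd rbd
$ rbd pool init cloudstack-rbd
$ ceph osd pool create cephfs_meta 32
$ ceph osd pool create cephfs_data 128
$ ceph fs new secondary-fs cephfs_meta cephfs_data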
Hi,
Did you find a workaround or explanation by any chance?
Hi,
So I guess that if max PGs per OSD were an issue, the problem should
appear right after creating a new pool, am I right?
It would happen right after removing or adding OSDs (btw, the default
is 250 PGs/OSD). But with only around 400 PGs and assuming a pool size
of 3 you shouldn't be faci
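As a rough sanity check (the OSD count below is assumed; the PG count and
pool size are from the thread):
  ~400 PGs x size 3 = ~1200 PG replicas in total
  1200 replicas / 10 OSDs = ~120 PGs per OSD
That is well below the 250 default, so adding or removing an OSD or two
should not push any single OSD over the limit.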
No, I couldn't find anything odd in the osd.2 log, but I'm not very familiar
with Ceph, so it's likely I missed something.
Did I hit the 300 PGs/OSD limit? I'm not sure, since I can't find any log
entry about it, and I don't know how to calculate the PG count on that OSD
for that moment.
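A rough way to read the current figure (as far as I know there is no
ready-made history of per-OSD PG counts, so this only shows the count right
now; osd.2 is the OSD from above):
$ ceph pg ls-by-osd osd.2 | wc -l
(the output includes a header line, and possibly a trailing note, so the
real count is slightly lower)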
One thing whi