Hi,
we replaced some of our OSDs a while ago and while everything
recovered as planned, one PG is still stuck at active+clean+remapped with
no backfilling taking place.
Mapping the PG in question shows me that one OSD is missing:
$ ceph pg map 35.1fe
osdmap e1265760 pg 35.1fe (35.1fe) ->
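A rough way to dig further into such a PG, assuming the PG id from above (the flags and what to look for are suggestions, not part of the original report):
$ ceph pg 35.1fe query | less    # check "up", "acting" and the recovery_state section
$ ceph osd df tree               # verify the expected OSD exists, is up and not too full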
Hi,
a lot of our OSDs crashed a few hours ago because of a failed assertion:
/build/ceph-15.2.3/src/osd/ECUtil.h: 34: FAILED ceph_assert(stripe_width
% stripe_size == 0)
Full output here:
https://pastebin.com/D1SXzKsK
All OSDs are on bluestore and run 15.2.3.
I think I messed up when I
got rid of all PGs, the OSDs were able to start again. Hope this
helps someone.
Regards,
Michael
On 22.06.2020 at 19:46, Michael Fladischer wrote:
Hi,
a lot of our OSDs crashed a few hours ago because of a failed
assertion:
/build/ceph-15.2.3/src/osd/ECUtil.h: 34: FAILED ceph
--"
echo " Show detail off the new create pool"
echo ""
sudo ceph osd pool get $pool all
Sylvain
-----Original Message-----
From: Michael Fladischer
Sent: 22 June 2020 15:23
To: ceph-users
On 24.06.2020 at 18:08, Marc Roos wrote:
I can remember reading this before. I was hoping you maybe had some
setup with systemd scripts or maybe udev.
We use udev to disable the write cache once a suitable disk is detected,
based on the MODEL_ID from the udev environment:
ACTION=="add", SUBSYSTEM=="
Hi,
our cluster is on Octopus 15.2.4. We noticed that all our MONs ran out of
space yesterday because the store.db folder kept growing until it filled
up the filesystem. We added more space to the MON nodes but store.db
keeps growing.
Right now it's ~220GiB on the two MON nodes that are active
Thanks Peter!
On 11.07.2020 at 06:13, Peter Woodman wrote:
in the meantime, you can turn on compression in your mon's rocksdb
tunables and make things slightly less scary, something like:
mon_rocksdb_options =
write_buffer_size=33554432,compression=kLZ4Compression,level_compaction_dynamic_level_bytes=true
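For illustration, that setting would go into ceph.conf on the MON nodes roughly as below (the section placement is an assumption; the MONs need a restart to pick it up):
[mon]
mon_rocksdb_options = write_buffer_size=33554432,compression=kLZ4Compression,level_compaction_dynamic_level_bytes=true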
Hi Andrei,
On 03.08.2020 at 16:26, Andrei Mikhailovsky wrote:
Module 'crash' has failed: dictionary changed size during iteration
I had the same error after upgrading to Octopus and I fixed it by
stopping all MGRs, removing /var/lib/ceph/crash/posted on all MGR nodes
(make a backup copy on
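A rough sketch of that procedure (unit names and paths are assumptions, adapt to your deployment):
# on every MGR node:
systemctl stop ceph-mgr.target
mv /var/lib/ceph/crash/posted /var/lib/ceph/crash/posted.bak   # keep a backup copy
systemctl start ceph-mgr.target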
Hi,
I accidentally destroyed the wrong OSD in my cluster. It is now marked
as "destroyed" but the HDD is still there and the data was not touched AFAICT.
I was able to activate it again using ceph-volume lvm activate and I can
mark the OSD as "in" but its status is not changing from "destroyed".
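For context, the two commands used here look roughly like this (OSD id and fsid are placeholders):
$ ceph-volume lvm activate <OSD_ID> <OSD_FSID>
$ ceph osd in <OSD_ID>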
Hi Eugen,
On 26.08.2020 at 11:47, Eugen Block wrote:
I don't know if the ceph version is relevant here but I could undo that
quite quickly in my small test cluster (Octopus native, no docker).
After the OSD was marked as "destroyed" I recreated the auth caps for
that OSD_ID (marking as destroy
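A sketch of what recreating the auth caps could look like (caps and keyring path taken from the manual OSD deployment docs, not from Eugen's exact steps):
$ ceph auth add osd.<ID> mon 'allow profile osd' mgr 'allow profile osd' osd 'allow *' \
      -i /var/lib/ceph/osd/ceph-<ID>/keyring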
Hi,
Is it possible to remove an existing WAL device from an OSD? I saw that
ceph-bluestore-tool has a command bluefs-bdev-migrate, but it's not
clear to me if this can only move a WAL device or if it can be used to
remove it ...
Regards,
Michael
Hi Andreas,
On 22.09.2020 at 22:35, Andreas John wrote:
and then removing the journal
enough?
any hints on how to remove the journal?
Regards,
Michael
Hi Eugen,
On 23.09.2020 at 14:51, Eugen Block wrote:
I don't think there's a way to remove WAL/DB without rebuilding the OSD.
ceph-bluestore-tool bluefs-bdev-migrate expects a target device to
migrate the data since it's a migration. I can't read the full thread (I
get a server error), what i
Hi Igor,
On 23.09.2020 at 18:38, Igor Fedotov wrote:
bin/ceph-bluestore-tool --path dev/osd0 --devs-source dev/osd0/block.wal
--dev-target dev/osd0/block.db --command bluefs-bdev-migrate
Would this also work if the OSD only has its primary block device and
the separate WAL device? Like runni
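If it does work the same way for a WAL-only layout, the call would presumably look like this (paths assumed, not verified):
$ ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-<ID> \
      --devs-source /var/lib/ceph/osd/ceph-<ID>/block.wal \
      --dev-target /var/lib/ceph/osd/ceph-<ID>/block \
      --command bluefs-bdev-migrate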