Glad to be of help, and thanks for confirming the 'lvm migrate' works.
Cheers,
Chris
On Sun, Jun 30, 2024 at 05:23:42PM +0800, Gregory Orange wrote:
Hello Chris, Igor,
I came here to say two things.
Firstly, thank you for this thread. I've not run perf dump or bluefs
stats before and found
On Mon, Dec 11, 2023 at 05:27:53PM +1100, duluxoz wrote:
Hi All,
I find myself in the position of having to change the k/m values on an
ec-pool. I've discovered that I simply can't change the ec-profile,
but have to create a "new ec-profile" and a "new ec-pool" using the
new values, then migr
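For anyone searching later, the general shape of that is roughly as follows
(the profile and pool names are only examples, and the final data migration
step depends on how the pool is used):

$ ceph osd erasure-code-profile set ec-new k=8 m=3 crush-failure-domain=host
$ ceph osd pool create mypool-new erasure ec-new
# if the pool backs rbd or cephfs data, EC overwrites need to be enabled
$ ceph osd pool set mypool-new allow_ec_overwrites true
# then copy the data across and retire the old pool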
Hi Igor,
The immediate answer is to use "ceph-volume lvm zap" on the db LV after
running the migrate. But for the longer term I think the "lvm zap" should
be included in the "lvm migrate" process.
I.e. this works to migrate a separate wal/db to the block device:
#
# WARNING! DO NOT ZAP AFTER
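To spell out the shape of that sequence (untested as written here; the osd id,
fsid and LV names are placeholders, and see the warning above before zapping
anything):

# lvm migrate must be run against a stopped OSD
$ systemctl stop ceph-osd@NN
# fold the separate wal/db back into the block (data) LV
$ ceph-volume lvm migrate --osd-id NN --osd-fsid <fsid> --from db wal --target <vg>/<block-lv>
# only once the OSD is confirmed healthy on the block device alone
$ ceph-volume lvm zap <vg>/<db-lv>
$ systemctl start ceph-osd@NN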
Hi Igor,
On Wed, Nov 15, 2023 at 12:30:57PM +0300, Igor Fedotov wrote:
Hi Chris,
haven't checked your actions thoroughly, but the migration is to be done on a
down OSD, which is apparently not the case here.
Maybe that's the culprit and we/you somehow missed the relevant error
during the migration pro
Hi,
What's the correct way to migrate an OSD wal/db from a fast device to the
(slow) block device?
I have an osd with wal/db on a fast LV device and block on a slow LV
device. I want to move the wal/db onto the block device so I can
reconfigure the fast device before moving the wal/db back t
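For reference, a quick way to see where an OSD's wal/db currently lives before
and after such a move (the osd id is just an example):

# the LVs ceph-volume has recorded for this OSD, including any separate db/wal
$ ceph-volume lvm list 76
# or ask the running OSD itself
$ ceph osd metadata 76 | grep -E 'bluefs|devices'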
Hi Igor,
Thanks for the suggestions. You may have already seen my followup message
where the solution was to use "ceph-bluestore-tool bluefs-bdev-migrate" to
get the lingering 128KiB of data moved from the slow to the fast device. I
wonder if your suggested "ceph-volume lvm migrate" would do t
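For completeness, the invocation was of roughly this shape, run against the
stopped OSD (reconstructed here; paths and osd id are placeholders):

$ systemctl stop ceph-osd@76
# move any bluefs files still sitting on the slow (block) device to the db device
$ ceph-bluestore-tool bluefs-bdev-migrate \
      --path /var/lib/ceph/osd/ceph-76 \
      --devs-source /var/lib/ceph/osd/ceph-76/block \
      --dev-target /var/lib/ceph/osd/ceph-76/block.db
$ systemctl start ceph-osd@76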
On Fri, Oct 06, 2023 at 02:55:22PM +1100, Chris Dunlop wrote:
Hi,
tl;dr why are my osds still spilling?
I've recently upgraded to 16.2.14 from 16.2.9 and started receiving
bluefs spillover warnings (due to the "fix spillover alert" per the
16.2.14 release notes). E.g. fr
Hi,
tl;dr why are my osds still spilling?
I've recently upgraded to 16.2.14 from 16.2.9 and started receiving bluefs
spillover warnings (due to the "fix spillover alert" per the 16.2.14
release notes). E.g. from 'ceph health detail', the warning on one of
these (there are a few):
osd.76 spi
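For anyone else chasing the same warning, the underlying numbers can be pulled
from the OSD's admin socket on its host (osd id and the jq filter are just
examples); non-zero slow_used_bytes is what the spillover alert is about:

$ ceph daemon osd.76 bluefs stats
$ ceph daemon osd.76 perf dump | jq '.bluefs | {db_total_bytes, db_used_bytes, slow_used_bytes}'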
On Wed, Apr 05, 2023 at 01:18:57AM +0200, Mikael Öhman wrote:
Trying to upgrade a containerized setup from 16.2.10 to 16.2.11 gave us two
big surprises, which I wanted to share in case anyone else encounters the same. I
don't see any nice solution to this apart from a new release that fixes the
perform
On Sun, Feb 12, 2023 at 20:24 Chris Dunlop wrote:
Is this "sawtooth" pattern of remapped pgs and misplaced objects a normal
consequence of adding OSDs?
On Sun, Feb 12, 2023 at 10:02:46PM -0800, Alexandre Marangone wrote:
This could be the pg autoscaler since you added new OSDs. Y
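For what it's worth, whether the autoscaler is what keeps kicking off new
remapping can be checked with (nothing here is destructive):

# each pool's current pg_num vs the autoscaler's target
$ ceph osd pool autoscale-status
# overall remapped/misplaced progress
$ ceph -s | grep -E 'remapped|misplaced'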
Hi,
ceph-16.2.9
I've added some new osds - some added to existing hosts and some on
newly-commissioned hosts. The new osds were added to the data side of an
existing EC 8.3 pool.
I've been waiting for the system to finish remapping / backfilling for
some time. Originally the number of remap
On Mon, Oct 24, 2022 at 02:41:07PM +0200, Maged Mokhtar wrote:
On 18/10/2022 01:24, Chris Dunlop wrote:
Hi,
Is there anywhere that describes exactly how rbd data (including
snapshots) are stored within a pool?
Hi Chris,
snapshots are stored on the same OSD as the current object.
rbd snapshots are se
Hi,
Is there anywhere that describes exactly how rbd data (including
snapshots) are stored within a pool?
I can see how a rbd broadly stores its data in rados objects in the
pool, although the object map is opaque. But once an rbd snap is created
and new data written to the rbd, where is the
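For anyone wanting to poke at this directly, the mapping from an rbd image to
its rados objects can be seen with something like the following (pool and
image names are examples, the object prefix is a placeholder):

# the image's object prefix
$ rbd info rbd.meta/fs | grep block_name_prefix
# the backing data objects in the data pool
$ rados -p rbd.ec ls | grep 'rbd_data.<prefix>' | head
# the per-snapshot clones of one of those objects
$ rados -p rbd.ec listsnaps rbd_data.<prefix>.0000000000000000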
On Fri, Sep 09, 2022 at 11:14:41AM +1000, Chris Dunlop wrote:
What can make a "rbd unmap" fail, assuming the device is not mounted
and not (obviously) open by any other processes?
I have multiple XFS on rbd filesystems, and often create rbd
snapshots, map and read-only mount th
On Fri, Sep 09, 2022 at 11:14:41AM +1000, Chris Dunlop wrote:
What can make a "rbd unmap" fail, assuming the device is not mounted
and not (obviously) open by any other processes?
I have multiple XFS on rbd filesystems, and often create rbd
snapshots, map and read-only mount th
What can make a "rbd unmap" fail, assuming the device is not mounted and
not (obviously) open by any other processes?
I have multiple XFS on rbd filesystems, and often create rbd snapshots,
map and read-only mount the snapshot, perform some work on the fs, then
unmount and unmap. The unmap reg
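For context, the cycle is roughly the one below, and when the final unmap
fails the question is what still holds the device (names are illustrative):

$ rbd snap create rbd.meta/fs@backup
$ rbd map --read-only rbd.meta/fs@backup
$ mount -o ro,norecovery,nouuid /dev/rbd0 /mnt/snap
# ... do the read-only work ...
$ umount /mnt/snap
$ rbd unmap /dev/rbd0
# if the unmap reports the device is busy, look for remaining users/holders
$ fuser -vm /dev/rbd0
$ ls /sys/block/rbd0/holders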
On Wed, Dec 15, 2021 at 02:05:05PM +1000, Michael Uleysky wrote:
I'm trying to upgrade a three-node nautilus cluster to pacific. I am updating
ceph on one node and restarting the daemons. The OSDs are ok, but the monitor
cannot enter quorum.
Sounds like the same thing as:
Pacific mon won't join Octopus mons
https://t
Hi,
Is there any way of using "ceph orch apply osd" to partition an SSD as
wal+db for a HDD OSD, with the rest of the SSD as a separate OSD?
E.g. on a machine (here called 'k1') with a small boot drive and a single
HDD and SSD, this will create an OSD on the HDD, with wal+db on a 60G
logical
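The sort of spec I mean is below (reconstructed here, not a working answer:
the host name, the 60G figure and the rotational filters are just the
example); what I can't see is how to also turn the remainder of the SSD into
its own OSD:

service_type: osd
service_id: hdd-with-ssd-db
placement:
  hosts:
    - k1
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
  block_db_size: 60G    # size syntax may need adjusting for the ceph version in use

applied with:

$ ceph orch apply osd -i osd-spec.yml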
Hi Sebastian,
On Thu, Sep 02, 2021 at 11:21:07AM +0200, Sebastian Wagner wrote:
On Mon, Aug 30, 2021 at 03:52:29PM +1000, Chris Dunlop wrote:
I'm stuck, mid upgrade from octopus to pacific using cephadm, at
the point of upgrading the mons.
Could you please verify that the mon_map of eac
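For the record, the monmap each mon is actually using can be checked like this
(the mon name is an example; the daemon command runs on that mon's host):

# what the cluster's quorum thinks the monmap is
$ ceph mon dump
# what one specific mon reports, which works even while it is stuck probing
$ ceph daemon mon.k1 mon_status | jq '.monmap'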
Hi,
Does anyone have any suggestions?
Thanks,
Chris
On Mon, Aug 30, 2021 at 03:52:29PM +1000, Chris Dunlop wrote:
Hi,
I'm stuck, mid upgrade from octopus to pacific using cephadm, at the
point of upgrading the mons.
I have 3 mons still on octopus and in quorum. When I try to bring
Hi,
I'm stuck, mid upgrade from octopus to pacific using cephadm, at the point
of upgrading the mons.
I have 3 mons still on octopus and in quorum. When I try to bring up a
new pacific mon it stays permanently in "probing" state.
The pacific mon is running off:
docker.io/ceph/ceph@sha256:8
Hi Frank,
I suggest you should file the ticket as you have the full story and the
use case to go with it.
I'm just an interested bystander, I just happened to know a little about
this area because of a filestore to bluestore migration I'd done recently.
Cheers,
Chris
On Fri, Mar 12, 2021
Hi Frank,
I agree there's a problem there. However, to clarify: the json file
already contains the /dev/sdq1 path (at data:path) and the "simple activate"
is just reading the file. I.e. the problem lies with the json file creator,
which was the "ceph-volume simple scan" step.
To fix your
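(The scan output lives under /etc/ceph/osd/, so one way out is to point it at
a persistent name by hand; the uuid and file name below are placeholders:)

# find a stable name for the partition currently called /dev/sdq1
$ ls -l /dev/disk/by-partuuid/ | grep sdq1
# then edit the data:path entry in the corresponding json
$ vi /etc/ceph/osd/<osd-id>-<osd-fsid>.json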
Hi Frank,
On Tue, Mar 02, 2021 at 02:58:05PM +, Frank Schilder wrote:
Hi all,
this is a follow-up on "reboot breaks OSDs converted from ceph-disk to ceph-volume
simple".
I converted a number of ceph-disk OSDs to ceph-volume using "simple scan" and
"simple activate". Somewhere along the w
On Thu, Jan 21, 2021 at 07:52:00PM -0500, Jason Dillaman wrote:
On Thu, Jan 21, 2021 at 6:18 PM Chris Dunlop wrote:
On Thu, Jan 21, 2021 at 10:57:49AM +0100, Robert Sander wrote:
Am 21.01.21 um 05:42 schrieb Chris Dunlop:
Is there any particular reason for that MAX_OBJECT_MAP_OBJECT_COUNT
On Thu, Jan 21, 2021 at 10:57:49AM +0100, Robert Sander wrote:
Hi,
Am 21.01.21 um 05:42 schrieb Chris Dunlop:
Is there any particular reason for that MAX_OBJECT_MAP_OBJECT_COUNT, or is
it just "this is crazy large, if you're trying to go over this you're
doing something wrong, re
Hi,
What limits are there on the "reasonable size" of an rbd?
E.g. when I try to create a 1 PB rbd with default 4 MiB objects on my
octopus cluster:
$ rbd create --size 1P --data-pool rbd.ec rbd.meta/fs
2021-01-20T18:19:35.799+1100 7f47a99253c0 -1 librbd::image::CreateRequest:
validate_layou
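(The usual ways around that validate_layout failure are fewer objects per
image or dropping the object map; both are untested at this scale and the
feature list is only an example:)

# larger objects means fewer objects for the same 1P size
$ rbd create --size 1P --object-size 32M --data-pool rbd.ec rbd.meta/fs
# or create the image without the object-map/fast-diff features
$ rbd create --size 1P --data-pool rbd.ec --image-feature layering --image-feature exclusive-lock rbd.meta/fs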
Just to follow up...
I was able to use a per-osd device copy to migrate nearly 80 FileStore
osds to BlueStore using a version of the script in:
https://tracker.ceph.com/issues/47839
Cheers,
Chris
On Fri, Oct 09, 2020 at 12:05:32PM +1100, Chris Dunlop wrote:
Hi,
The docs have scant detail
except for '--op dup' but it's not really clear what exactly that does. I
figured it could be something
like "duplicate" and tried a couple of different approaches but none of them
succeeded.
Maybe someone else has more insights.
Quoting Chris Dunlop:
Hi Eugen,
Remin
Hi Eugen,
Reminder: I'm looking for guidance / hints on how to migrate from
filestore to bluestore using a "per-osd device copy":
https://docs.ceph.com/en/latest/rados/operations/bluestore-migration/#per-osd-device-copy
On Fri, Oct 09, 2020 at 07:03:33AM +, Eugen Block wrote:
I think by "
Hi,
The docs have scant detail on doing a migration to bluestore using a
per-osd device copy:
https://docs.ceph.com/en/latest/rados/operations/bluestore-migration/#per-osd-device-copy
This mentions "using the copy function of ceph-objectstore-tool", but
ceph-objectstore-tool doesn't have a c
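(Per the followups above, what eventually worked was the 'dup' op; stripped
down to its bare bones it looks like the following, with the full procedure,
new LVs, keyring and so on, in the tracker script; osd id and paths are
placeholders:)

# the OSD must be stopped; source is the old filestore dir, target a freshly
# prepared, empty bluestore dir for the same osd id
$ systemctl stop ceph-osd@NN
$ ceph-objectstore-tool --op dup \
      --data-path /var/lib/ceph/osd/ceph-NN \
      --target-data-path /var/lib/ceph/osd/ceph-NN-new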