Hi list,
I'm in the middle of an OpenStack migration (obviously Ceph backed) and
stumbled upon some huge virtual machines.
To keep downtime to a minimum, I'm thinking of using Ceph's snapshot
features via rbd export-diff and import-diff.
However, is it safe (or even supported) to do this between two separate
clusters?
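The incremental pattern I have in mind looks roughly like this (pool,
image and cluster names are hypothetical, and the destination image has
to exist with the same size before the first import-diff):

    # Initial bulk copy while the VM keeps running:
    rbd --cluster old snap create volumes/volume-foo@migrate1
    rbd --cluster old export-diff volumes/volume-foo@migrate1 - \
      | rbd --cluster new import-diff - volumes/volume-foo

    # During the short downtime window, ship only the delta:
    rbd --cluster old snap create volumes/volume-foo@migrate2
    rbd --cluster old export-diff --from-snap migrate1 \
        volumes/volume-foo@migrate2 - \
      | rbd --cluster new import-diff - volumes/volume-foo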
Hi Konstantin,
Thank you very much. That's a good question.
The current OpenStack and Ceph deployment and "the other" OpenStack and
Ceph deployment are, apart from networking, completely separate.
In terms of OpenStack I can recreate the compute instances and storage
volumes, but obviously I need to copy the data.
Hi Anthony,
Thanks as well.
Well, it's a one-time job.
K.
On 14-05-2020 09:10, Anthony D'Atri wrote:
> Why not use rbd-mirror to handle the volumes?
I need to mirror individual RBD images, whereas for rbd-mirror
"mirroring is configured on a per-pool basis" (according to the
documentation).
On 14-05-2020 09:13, Anthony D'Atri wrote:
> So?
Thanks all, I'm going to investigate rbd-mirror further.
K.
On 14-05-2020 09:30, Anthony D'Atri wrote:
> It’s entirely possible — and documented — to mirror individual images. Your
> proposal to use snapshots is reinventing the wheel, but with less efficiency.
>
> https://docs.ceph.com/docs/nau
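For the archives: enabling mirroring for a single image boils down to
something like this (pool and image names are assumptions):

    # Put the pool into per-image mirroring mode (on both clusters):
    rbd mirror pool enable volumes image

    # Journal-based mirroring requires the journaling feature:
    rbd feature enable volumes/volume-foo journaling

    # Enable mirroring for just this one image:
    rbd mirror image enable volumes/volume-foo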
Hi list,
I'm trying to figure out whether we want rbd_store_chunk_size = 4 or
rbd_store_chunk_size = 8 (or maybe something different) on our new
OpenStack / Ceph environment.
Any opinions on this matter?
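For context, this is the Glance option in glance-api.conf; a minimal
sketch with placeholder values:

    [glance_store]
    default_store = rbd
    rbd_store_pool = images
    # Size (in MB) of the RADOS objects an image is chunked into;
    # the upstream default is 8.
    rbd_store_chunk_size = 8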
Cheers,
Kees
--
https://nefos.nl/contact
Nefos IT bv
Ambachtsweg 25 (industrienummer 4217)
5627 BZ E
Hi Robert,
As long as you triple-check the permissions on the cache tier (they
should be the same as on your actual storage pool) you should be fine.
In our setup I applied this a few times. The first time I assumed
permissions would be inherited or not applicable, but IOPS get
redirected towards the cache tier.
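In concrete terms, the client caps have to cover both pools; a sketch
assuming a "volumes" pool with a "volumes-cache" tier and a
client.cinder key:

    ceph auth caps client.cinder \
      mon 'allow r' \
      osd 'allow rwx pool=volumes, allow rwx pool=volumes-cache'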
Hi,
This is a chicken-and-egg problem, I guess. The boot process (whether
UEFI or BIOS; given x86) should be able to load boot loader code, a
Linux kernel and an initial RAM disk (although in some cases a kernel
alone could be enough).
So yes: use PXE to load a Linux kernel and RAM disk. The RAM
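As an illustration, a minimal pxelinux entry could look like this (all
file names and paths are assumptions):

    # /srv/tftp/pxelinux.cfg/default
    DEFAULT netboot
    LABEL netboot
      KERNEL vmlinuz
      APPEND initrd=initrd.img ip=dhcp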
Hi list,
Thanks again for pointing me towards rbd-mirror!
I've read the documentation, old mailing list posts, blog posts and
some additional guides. It seems like the right tool to help me through
my data migration.
Given one-way synchronisation and per-image (so, not per-pool)
configuration, it's still
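As far as I understand it, one-way replication means running the
rbd-mirror daemon only on the destination cluster and registering the
source cluster as a peer; roughly (cluster, pool and client names are
assumptions):

    # On the destination cluster:
    rbd mirror pool peer add volumes client.rbd-mirror@old

    # Start the daemon; the instance id matching a cephx user
    # (client.rbd-mirror.mirror1 here) is an assumption:
    systemctl enable --now ceph-rbd-mirror@rbd-mirror.mirror1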
Hi Anthony,
A one-way mirror suits me fine in this case (the old cluster will be
dismantled in the meantime), so I guess a single rbd-mirror daemon
should suffice.
The pool consists of OpenStack Cinder volumes named after a UUID (e.g.
volume-ca69183a-9601-11ea-8e82-63973ea94e82 and such). The change of
con
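Enabling mirroring for all those volumes could then be scripted along
these lines (the pool name and the volume- prefix are only examples):

    # On the source cluster, enable journal-based mirroring for
    # every Cinder volume in the pool:
    for img in $(rbd ls volumes | grep '^volume-'); do
        rbd feature enable "volumes/$img" journaling
        rbd mirror image enable "volumes/$img"
    done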
Thanks for clearing that up, Jason.
K.
On 14-05-2020 20:11, Jason Dillaman wrote:
> rbd-mirror can only remove images that (1) have mirroring enabled and
> (2) are not split-brained with its peer. It's totally fine to only
> mirror a subset of images within a pool and it's fine to only mirror
> o