Is there at least a way to blacklist certain devices, so ceph won't try to
add them?
Or should I completely disable ceph orch to stop it trying to add new
devices all the time?
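For reference, a minimal sketch of how this is usually handled on the
cephadm side (assuming it is the stock "all available devices" behaviour
that keeps creating OSDs; nothing below is specific to this cluster):

    # tell the orchestrator not to consume available devices automatically
    ceph orch apply osd --all-available-devices --unmanaged=true

    # see which devices the orchestrator currently considers usable
    ceph orch device ls

Instead of blacklisting individual devices, the device filters of an OSD
service spec (paths, size, model, rotational) can be used to narrow which
devices are eligible at all.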
On Mon, Jan 31, 2022 at 9:59 AM Ricardo Alonso wrote:
Hey all,
As much as I'm enjoying this discussion, it's completely outside my
original question:
How to stop the automatic OSD creation from the Ceph orchestrator?
The problem happens because, when using cinderlib, oVirt uses krbd (not
librbd), and because of this, the kernel
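(As a rough illustration of the two access paths, with invented pool and
image names: krbd maps the image through the kernel client, while librbd
is opened in userspace, e.g. by QEMU.)

    # krbd: the kernel client maps the image to a block device
    rbd device map rbd-pool/vm-disk-01    # appears as /dev/rbd0

    # librbd: userspace access to the same image, e.g. via qemu-img
    # (assumes ceph.conf and a keyring are already configured)
    qemu-img info rbd:rbd-pool/vm-disk-01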
Hi,
> On 31 Jan 2022, at 11:38, Marc wrote:
>
> This is incorrect. I am using live migration with Nautilus and stock kernel
> on CentOS7
Marc, I think that you are confusing live migration of virtual machines [1] and
live migration of RBD images [2] inside the cluster (between pools, for
example).
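(As an aside, a minimal sketch of what RBD image live migration looks
like since Nautilus; the pool and image names are invented:)

    # live-migrate an RBD image between pools within the cluster
    rbd migration prepare source-pool/vm-disk-01 target-pool/vm-disk-01
    rbd migration execute target-pool/vm-disk-01
    rbd migration commit target-pool/vm-disk-01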
>
> > On 31 Jan 2022, at 00:53, Nir Soffer wrote:
> >
> > Live migration and snapshots are not available? This is news to me.
> >
>
>
> Welcome to krbd world.
This is incorrect. I am using live migration with Nautilus and stock kernel on
CentOS7
Hi,
> On 31 Jan 2022, at 00:53, Nir Soffer wrote:
>
> Live migration and snapshots are not available? This is news to me.
>
Welcome to krbd world. But it is no fun when you need to update the rbd
driver for the entire aggregate, with a migration and a reboot for each host.
> Why do you ne
It's a shame not to see oVirt fully integrated with Ceph. Even Proxmox can
do it. I also understand the limitations of Ceph/oVirt usage, but I
believe those small issues can be overcome. I am still hoping to see a
better integration.
Does anyone know how to make Ceph stop trying to add the rbd devices?
Hi,
The oVirt Storage team just dropped the old Cinder integration and made the
cinderlib integration (MBS) without librbd support:
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1997241
[2] https://bugzilla.redhat.com/show_bug.cgi?id=2027719
Features like live-migration, easy Ceph version updates and, d