I had several kernel-mapped RBDs as well as ceph-fuse-mounted CephFS
clients when I upgraded from Jewel to Luminous. I rolled out the client
upgrades over a few weeks after the upgrade. I had tested that the client
use cases I had would be fine running Jewel connecting to a Luminous
cluster, so there was no rush to upgrade them all at once.
The VMs are XenServer VMs with their virtual disks stored on the NFS server
which has the RBD mounted … So there is no migration from my POV, as there is
no second storage to migrate to ...
All your pain is self-inflicted.
Just FYI, clients are not interrupted when you upgrade Ceph. Clients will
keep working throughout the upgrade.
> On 03.04.2018 at 13:31, Konstantin Shalygin wrote:
>
>> and, true, the VMs have to be shut down / the server rebooted
>
>
> That is not necessary. Just migrate the VM.
Hi,
The VMs are XenServer VMs with their virtual disks stored on the NFS server
which has the RBD mounted … So there is no migration from my POV, as there is
no second storage to migrate to ...
and, true, the VMs have to be shut down / the server rebooted
That is not necessary. Just migrate the VM.
k
Hi Robert,
> On 29.03.2018 at 10:27, Robert Sander wrote:
>
> On 28.03.2018 11:36, Götz Reinicke wrote:
>
>> My question is: How to proceed with the servers which map the RBDs?
>
> Do you intend to upgrade the kernels on these RBD clients acting as NFS
> servers?
>
> If so, you have to plan a reboot anyway. If not, nothing changes.
On 28.03.2018 11:36, Götz Reinicke wrote:
> My question is: How to proceed with the servers which map the RBDs?
Do you intend to upgrade the kernels on these RBD clients acting as NFS
servers?
If so, you have to plan a reboot anyway. If not, nothing changes.
Or are you using qemu+rbd in userspace?
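For what it's worth, one way to see at a glance whether a given NFS server uses
the kernel RBD client (and would therefore need a reboot for a kernel upgrade)
rather than userspace librbd is the rbd sysfs tree, which is roughly what
"rbd showmapped" reports. A small sketch, assuming a Linux host that may have
kernel-mapped images:

import os

# Kernel-mapped RBD images appear under /sys/bus/rbd/devices/<id>/;
# userspace librbd (e.g. qemu+rbd) leaves nothing here.
SYSFS_RBD = '/sys/bus/rbd/devices'

def read_attr(path):
    with open(path) as f:
        return f.read().strip()

if os.path.isdir(SYSFS_RBD):
    for dev_id in sorted(os.listdir(SYSFS_RBD), key=int):
        base = os.path.join(SYSFS_RBD, dev_id)
        pool = read_attr(os.path.join(base, 'pool'))
        image = read_attr(os.path.join(base, 'name'))
        print(f'/dev/rbd{dev_id}: {pool}/{image}')
else:
    print('No kernel RBD devices mapped on this host.')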
Hi, I bet I read this somewhere already, but can't remember where …
Our Ceph 10.2 cluster is fine and healthy, and I have a couple of RBDs exported
to some file servers and an NFS server.
The upgrade documentation for v12.2 is clear about upgrading/restarting all
MONs first and, after that, the OSDs. My question is: How to proceed with the
servers which map the RBDs?
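In case it helps, the documented order (all MONs first, then the OSDs) can be
sanity-checked between steps from an admin node. A minimal sketch, assuming the
ceph CLI with admin credentials; note that "ceph versions" only exists once the
MONs themselves run Luminous:

import json
import subprocess

def ceph_json(*args):
    # Run a ceph CLI command and parse its JSON output.
    out = subprocess.run(['ceph', *args, '--format', 'json'],
                         capture_output=True, text=True, check=True).stdout
    return json.loads(out)

# 'ceph versions' (Luminous and later) groups running daemons by version,
# e.g. {"mon": {"ceph version 12.2.x (...) luminous (stable)": 3}, ...}
versions = ceph_json('versions')

if not all('12.2' in v for v in versions.get('mon', {})):
    raise SystemExit('Finish upgrading/restarting all MONs before the OSDs.')

# Optionally set noout so restarting OSDs one by one does not trigger
# rebalancing while they are briefly down.
subprocess.run(['ceph', 'osd', 'set', 'noout'], check=True)
print('All MONs report Luminous; OK to proceed with the OSDs.')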