Re: [ceph-users] Upgrading ceph with HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent

2018-09-06 Thread Marc Roos
> > The advised solution is to upgrade ceph only in HEALTH_OK state. And I
> > also read somewhere that it is bad to have your cluster for a long time
> > in an HEALTH_ERR state.
> >
> > But why is this bad?
>
> Aside from the obvious (errors are bad things!), many people have extern[...]
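The thread subject mentions one scrub error and one inconsistent PG. A hedged sketch of how one might locate and repair such a PG before upgrading; the commands are standard ceph/rados tooling and the PG id 2.1a is purely a placeholder, not taken from the thread:

    # Show which PG is inconsistent and why
    ceph health detail

    # Inspect the inconsistent objects in that PG (PG id 2.1a is a placeholder)
    rados list-inconsistent-obj 2.1a --format=json-pretty

    # Ask the OSDs to repair the PG, then re-check health
    ceph pg repair 2.1a
    ceph health detail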

Re: [ceph-users] Upgrading ceph with HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent

2018-09-06 Thread Marc Roos
Thanks, interesting to read. So in Luminous it is not really a problem. I was expecting to get into trouble with the monitors/MDS, because my failover takes quite long, and I thought it was related to the damaged PG. Luminous: "When the past intervals tracking structure was rebuilt around exactly t[...]

Re: [ceph-users] Upgrading ceph with HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent

2018-09-05 Thread Sean Purdy
On Wed, 5 Sep 2018, John Spray said:
> On Wed, Sep 5, 2018 at 8:38 AM Marc Roos wrote:
> >
> > The advised solution is to upgrade ceph only in HEALTH_OK state. And I
> > also read somewhere that it is bad to have your cluster for a long time
> > in an HEALTH_ERR state.
> >
> > But why is this bad? [...]

Re: [ceph-users] Upgrading ceph with HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent

2018-09-05 Thread John Spray
On Wed, Sep 5, 2018 at 8:38 AM Marc Roos wrote:
>
> The advised solution is to upgrade ceph only in HEALTH_OK state. And I
> also read somewhere that it is bad to have your cluster for a long time
> in an HEALTH_ERR state.
>
> But why is this bad?

Aside from the obvious (errors are bad things!), many people have extern[...]
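Since the advice in this thread is to upgrade only from HEALTH_OK, a minimal pre-upgrade gate one might script; this is purely illustrative, not something the thread itself contains:

    # Wait until the cluster reports HEALTH_OK before starting the upgrade
    until ceph health | grep -q HEALTH_OK; do
        echo "cluster not healthy yet: $(ceph health)"
        sleep 30
    done
    echo "HEALTH_OK - safe to start the rolling upgrade"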

Re: [ceph-users] Upgrading ceph and mapped rbds

2018-04-07 Thread David Turner
I had several kernel-mapped rbds as well as ceph-fuse-mounted CephFS clients when I upgraded from Jewel to Luminous. I rolled out the client upgrades over a few weeks after the upgrade. I had tested that the client use cases I had would be fine running Jewel connecting to a Luminous cluster, so ther[...]
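A hedged sketch of how one could check which client versions and feature bits are still connecting to a Luminous cluster before tightening compatibility; `ceph features` exists from Luminous onwards, and the min-compat step is an illustration, not something David describes doing:

    # Summarise the release and feature bits of all connected clients and daemons
    ceph features

    # On a client host, list RBD images mapped through the kernel client
    rbd showmapped

    # Only once no pre-Jewel clients remain, optionally raise the floor
    ceph osd set-require-min-compat-client jewel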

Re: [ceph-users] Upgrading ceph and mapped rbds

2018-04-03 Thread Konstantin Shalygin
> The VMs are XenServer VMs with virtual disks saved on the NFS server which
> has the RBD mounted … So there is no migration from my POV as there is no
> second storage to migrate to ...

All your pain is self-inflicted. Just FYI, clients are not interrupted when you upgrade ceph. Client will be [...]

Re: [ceph-users] Upgrading ceph and mapped rbds

2018-04-03 Thread Götz Reinicke
> On 03.04.2018 at 13:31, Konstantin Shalygin wrote:
>
>> and true the VMs have to be shut down/server rebooted
>
> That is not necessary. Just migrate the VM.

Hi,

The VMs are XenServer VMs with virtual disks saved on the NFS server which has the RBD mounted … So there is no migration from my POV as there is no second storage to migrate to ...

Re: [ceph-users] Upgrading ceph and mapped rbds

2018-04-03 Thread Konstantin Shalygin
> and true the VMs have to be shut down/server rebooted

That is not necessary. Just migrate the VM.

k

Re: [ceph-users] Upgrading ceph and mapped rbds

2018-04-03 Thread Götz Reinicke
Hi Robert,

> On 29.03.2018 at 10:27, Robert Sander wrote:
>
> On 28.03.2018 11:36, Götz Reinicke wrote:
>> My question is: How to proceed with the servers which map the rbds?
>
> Do you intend to upgrade the kernels on these RBD clients acting as NFS
> servers?
>
> If so you have to plan a reboot anyway. [...]

Re: [ceph-users] Upgrading ceph and mapped rbds

2018-03-29 Thread Robert Sander
On 28.03.2018 11:36, Götz Reinicke wrote:
> My question is: How to proceed with the servers which map the rbds?

Do you intend to upgrade the kernels on these RBD clients acting as NFS servers?

If so, you have to plan a reboot anyway. If not, nothing changes. Or are you using qemu+rbd in userspace [...]
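A hedged sketch of how one might check, on such an NFS server, which RBDs are kernel-mapped and which kernel (and therefore which krbd client) is in use; these are standard rbd/NFS utility calls, not commands quoted from the thread:

    # List RBD images currently mapped through the kernel client on this host
    rbd showmapped

    # The krbd feature set is tied to the running kernel version
    uname -r

    # Mounts and NFS exports backed by those mappings
    grep rbd /proc/mounts
    exportfs -v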

Re: [ceph-users] Upgrading Ceph

2016-02-01 Thread david
Hi Vlad, I just upgraded my ceph cluster from firefly to hammer and everything went fine. Please do it according to the manuals on www.ceph.com: restart the monitors first and then the OSDs. I restarted the OSDs one by one, which means restarting one OSD, waiting for it to run normally, and then r[...]
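A minimal sketch of the rolling restart david describes, assuming systemd-managed daemons (older firefly/hammer installs used `service ceph restart osd.N` instead) and OSD ids taken from `ceph osd ls`:

    # Restart the monitors first (one node at a time)
    systemctl restart ceph-mon.target

    # Then restart OSDs one by one, waiting for the cluster to settle each time
    for id in $(ceph osd ls); do
        systemctl restart "ceph-osd@${id}"
        until ceph health | grep -q HEALTH_OK; do
            sleep 10
        done
    done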

Re: [ceph-users] Upgrading Ceph

2016-02-01 Thread Ben Hines
Upgrades have been easy for me, following the steps. I would say to be careful not to 'miss' one OSD, or forget to restart it after updating, since having an OSD on a different version than the rest of the cluster for too long during an upgrade started to cause issues when I missed one once. -Ben
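A hedged way to catch the "missed OSD" case Ben mentions is to compare the versions the daemons report after the restarts; `ceph tell` works on the releases discussed here, and newer clusters have a one-shot summary command:

    # Ask every OSD which version it is running; the output should show a
    # single version string across all OSDs once the upgrade is complete
    ceph tell osd.\* version

    # From Luminous onwards, 'ceph versions' prints a per-daemon summary instead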

Re: [ceph-users] Upgrading Ceph

2016-02-01 Thread Vlad Blando
What if the upgrade fails, what is the rollback scenario?

On Wed, Jan 27, 2016 at 10:10 PM, wrote:
> I just upgraded my cluster from firefly to infernalis (firefly to hammer to
> infernalis). All came up like a charm. I upgraded the mons first, then the
> OSDs, one by one, restarting the daemon after [...]

Re: [ceph-users] Upgrading Ceph

2016-01-27 Thread Eneko Lacunza
Hi,

On 27/01/16 at 15:00, Vlad Blando wrote:
> I have a production Ceph cluster:
> - 3 nodes
> - 3 mons, one on each node
> - 9 OSDs @ 4 TB per node
> - ceph version 0.80.5 (38b73c67d375a2552d8ed67843c8a65c2c0feba6)
>
> Now I want to upgrade it to Hammer. I saw the documentation on upgrading, it look[...]
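A common precaution for a rolling firefly-to-hammer upgrade, shown here as a general sketch rather than something Eneko spells out in this message, is to stop the cluster from rebalancing while individual OSDs restart:

    # Prevent OSDs from being marked out while they restart during the upgrade
    ceph osd set noout

    # ... upgrade packages and restart mons, then OSDs, node by node ...

    # Re-enable normal out-marking once every daemon runs the new version
    ceph osd unset noout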

Re: [ceph-users] Upgrading Ceph 0.72 to 0.79 on Ubuntu 12.04

2014-06-21 Thread Uwe Grohnwaldt
> Subject: Re: [ceph-users] Upgrading Ceph 0.72 to 0.79 on Ubuntu 12.04
>
> Well as mentioned, I do not want to upgrade the operating system. Can we not
> run ceph 0.79 on Ubuntu 12.04?
>
> On Jun 21, 2014 1:54 PM, "Uwe Grohnwaldt" <mailto:u...@grohnwaldt[...]

Re: [ceph-users] Upgrading Ceph 0.72 to 0.79 on Ubuntu 12.04

2014-06-21 Thread Shesha Sreenivasamurthy
Well as mentioned, I do not want to upgrade the operating system. Can we not run ceph 0.79 on Ubuntu 12.04?

On Jun 21, 2014 1:54 PM, "Uwe Grohnwaldt" wrote:
> Hi,
>
> The best way to upgrade is to use the official ceph repository. It has
> firefly (0.80.1) for precise. (http://ceph.com/docs/master/install/get-packages/) [...]

Re: [ceph-users] Upgrading Ceph 0.72 to 0.79 on Ubuntu 12.04

2014-06-21 Thread Uwe Grohnwaldt
Hi,

The best way to upgrade is to use the official ceph repository. It has firefly (0.80.1) for precise (http://ceph.com/docs/master/install/get-packages/). Moreover, you should install the trusty kernel (linux-generic-lts-trusty).

Best Regards,
Uwe Grohnwaldt
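A hedged sketch of the setup Uwe suggests on Ubuntu 12.04 (precise); the repository URL is the historical ceph.com one from that era (it no longer resolves today) and is shown only as an illustration:

    # Add the ceph firefly repository for Ubuntu precise
    echo "deb http://ceph.com/debian-firefly/ precise main" | \
        sudo tee /etc/apt/sources.list.d/ceph.list

    # Pull in the newer kernel line Uwe recommends, then ceph itself
    sudo apt-get update
    sudo apt-get install -y linux-generic-lts-trusty
    sudo apt-get install -y ceph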

Re: [ceph-users] Upgrading ceph

2014-02-25 Thread Pavel V. Kaygorodov
On 25 Feb 2014, at 14:13, Srinivasa Rao Ragolu wrote:
> It is always better to have the same version on all the nodes of the cluster
> to avoid integration issues.

But while updating, some nodes will run an older version for some period. Is this OK?

Pavel.

> On Tue, Feb 25, 2014 at [...]

Re: [ceph-users] Upgrading ceph

2014-02-24 Thread Srinivasa Rao Ragolu
Yes Sahana,

First of all, uninstall the ceph packages from your node. Then, for the rpm-based approach: open /etc/yum.repos.d/ceph.repo and replace {ceph-stable-release} with emperor and {distro} with your rpm-based distro in

baseurl=http://ceph.com/rpm-{ceph-stable-release}/{distro}/noarch

Now [...]
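A hedged sketch of what the resulting repo entry might look like for an EL6 host; the `[ceph-noarch]` section name, the el6 path, and the disabled gpgcheck are assumptions for illustration (in practice you would enable gpgcheck with the ceph release key), as the thread only shows the baseurl template:

    # Write a minimal ceph.repo pointing at the emperor packages for el6
    sudo tee /etc/yum.repos.d/ceph.repo <<'EOF'
    [ceph-noarch]
    name=Ceph noarch packages
    baseurl=http://ceph.com/rpm-emperor/el6/noarch
    enabled=1
    gpgcheck=0
    EOF

    # Then install ceph from the new repository
    sudo yum install ceph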