On 05/12/17 17:10, Graham Allan wrote:
> On 12/05/2017 07:20 AM, Wido den Hollander wrote:
>> Hi,
>>
>> I haven't tried this before and I expect it to work, but I wanted to
>> check before proceeding.
>>
>> I have a Ceph cluster which is running with manually formatted
>> FileStore XFS disks, Jewel, sysvinit and Ubuntu 14.04.
>>
>> I would like to upgrade this system to Luminous, but since I have to
>> re-install all servers and re-format all disks I'd like to move it to
>> BlueStore at the same time.
> 
> You don't *have* to update the OS in order to update to Luminous, do you? 
> Luminous is still supported on Ubuntu 14.04 AFAIK.
> 
> Though obviously I understand your desire to upgrade; I only ask because I am 
> in the same position (Ubuntu 14.04, xfs, sysvinit), though happily with a 
> smaller cluster. Personally I was planning to upgrade ours entirely to 
> Luminous while still on Ubuntu 14.04, before later going through the same 
> process of decommissioning one machine at a time to reinstall with CentOS 7 
> and Bluestore. I too don't see any reason the mixed Jewel/Luminous cluster 
> wouldn't work, but still felt less comfortable with extending the upgrade 
> duration.
> 
> Graham

Yes, you can run Luminous on Trusty; one of my clusters is currently 
Luminous/BlueStore/Trusty, as I've not had time to sort out OS upgrades on it. 
I second the suggestion that it would be better to do the Luminous upgrade 
first, retaining the existing FileStore OSDs, and then do the OS upgrade/OSD 
recreation on each node in sequence. I don't think there should realistically 
be any problems with running a mixed cluster for a while. Doing the 
Jewel->Luminous upgrade on the existing installs first shouldn't cost 
significant extra effort or time, as you're already predicting at least two 
months to upgrade everything, and it does minimise the amount of change at any 
one time in case things do start going horribly wrong.

Also, at 48 nodes, I would've thought you could get away with cycling more than 
one of them at once. Assuming they're homogeneous, taking out even 4 at a time 
should only raise utilisation on the rest of the cluster to a little over 65%, 
which still seems safe to me, and you'd waste far less time waiting for 
recovery. (I recognise that depending on the nature of your employment 
situation this may not actually be desirable...)
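For anyone wanting to sanity-check that 65% figure, here's a back-of-the-envelope sketch in Python. It assumes homogeneous nodes and roughly 60% current utilisation (an assumed figure, chosen because it's consistent with the number quoted above; substitute your own):

```python
def utilisation_after_removal(current_util, total_nodes, removed_nodes):
    """Estimate per-node utilisation after the data held by the removed
    nodes is redistributed evenly across the remaining ones.
    Assumes homogeneous nodes and even data distribution."""
    return current_util * total_nodes / (total_nodes - removed_nodes)

# Hypothetical figures: ~60% utilisation, 48-node cluster, 4 nodes out.
print(round(utilisation_after_removal(0.60, 48, 4) * 100, 1))  # -> 65.5
```

The same function also shows why cycling one node at a time is so slow for little safety gain: removing a single node from 48 only takes you from 60% to about 61.3%, so the headroom cost of doing four at once is small.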

Rich

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
