The mailing list strips attachments; you'll need to post it on an external site.


> On Jan 8, 2016, at 11:00 AM, Yiping Zhang <[email protected]> wrote:
> 
> Hm, I definitely included the attachment (I can see in my Outlook Sent folder 
> that the message has an attachment).
> 
> I am wondering if the mailing list requires any special privilege to send 
> messages with attachments. Does anyone know?  I can forward the doc to 
> someone who has the power to resend it to the list if anyone volunteers to do 
> so.
> 
> To answer Nux!'s suggestion of doing a blog: I'm an old-fashioned guy and I 
> haven't tried that yet :)
> 
> Yiping
> 
> 
> 
> 
>> On 1/8/16, 10:35 AM, "Davide Pala" <[email protected]> wrote:
>> 
>> I think you've forgotten the attachment ...
>> 
>> 
>> 
>> Sent from my Samsung device
>> 
>> 
>> -------- Original Message --------
>> From: Yiping Zhang <[email protected]>
>> Date: 08/01/2016 18:44 (GMT+01:00)
>> To: [email protected]
>> Subject: Re: A Story of a Failed XenServer Upgrade
>> 
>> 
>> See the attached PDF document. This is the final procedure we adopted after 
>> upgrading seven XenServer pools.
>> 
>> Yiping
>> 
>> 
>> 
>> 
>> 
>>> On 1/8/16, 2:20 AM, "Alessandro Caviglione" <[email protected]> wrote:
>>> 
>>> Hi Yiping,
>>> yes, thank you very much!!
>>> Please share the doc so I can try the upgrade process again and see if it
>>> was only an "unfortunate coincidence of events" or a wrong upgrade process.
>>> 
>>> Thanks!
>>> 
>>>> On Fri, Jan 8, 2016 at 10:20 AM, Nux! <[email protected]> wrote:
>>>> 
>>>> Yiping,
>>>> 
>>>> Why not make a blog post about it so everyone can benefit? :)
>>>> 
>>>> Lucian
>>>> 
>>>> --
>>>> Sent from the Delta quadrant using Borg technology!
>>>> 
>>>> Nux!
>>>> www.nux.ro
>>>> 
>>>> ----- Original Message -----
>>>>> From: "Yiping Zhang" <[email protected]>
>>>>> To: [email protected], [email protected]
>>>>> Sent: Friday, 8 January, 2016 01:31:21
>>>>> Subject: Re: A Story of a Failed XenServer Upgrade
>>>> 
>>>>> Hi, Alessandro
>>>>> 
>>>>> Late to the thread.  Is this still an issue for you?
>>>>> 
>>>>> I went through this process before and I have a step-by-step document that
>>>>> I can share if you still need it.
>>>>> 
>>>>> Yiping
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>>> On 1/2/16, 4:43 PM, "Ahmad Emneina" <[email protected]> wrote:
>>>>>> 
>>>>>> Hi Alessandro,
>>>>>> Without seeing the logs or DB, it will be hard to diagnose the issue. I've
>>>>>> seen something similar in the past, where the XenServer host version isn't
>>>>>> getting updated in the DB as part of the XS upgrade process. That caused
>>>>>> CloudStack to use the wrong hypervisor resource to try connecting back to
>>>>>> the XenServers... ending up in failure. If you could share sanitized
>>>>>> versions of your log and DB, someone here might be able to give you the
>>>>>> necessary steps to get your cluster back under CloudStack control.
>>>>>> 
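For anyone hitting the same symptom, one way to check the failure mode Ahmad describes is to compare the version CloudStack has recorded against what the hosts actually run. A rough sketch, assuming a stock "cloud" database whose host table carries hypervisor_version and status columns (credentials are placeholders):

    # on the management server
    mysql -u cloud -p -e "SELECT id, name, status, hypervisor_type, hypervisor_version
        FROM cloud.host WHERE removed IS NULL AND type = 'Routing';"

If hypervisor_version still shows the pre-upgrade release after the XS upgrade, that matches the symptom above.
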
>>>>>> On Sat, Jan 2, 2016 at 1:27 PM, Alessandro Caviglione <
>>>>>> [email protected]> wrote:
>>>>>> 
>>>>>>> No guys, as the article says, my first action was to put the Pool Master
>>>>>>> in Maintenance Mode INSIDE CS: "It is vital that you upgrade the XenServer
>>>>>>> Pool Master first before any of the Slaves.  To do so you need to empty the
>>>>>>> Pool Master of all CloudStack VMs, and you do this by putting the Host into
>>>>>>> Maintenance Mode within CloudStack to trigger a live migration of all VMs
>>>>>>> to alternate Hosts"
>>>>>>> 
>>>>>>> This is exactly what I did, and after the XS upgrade no host was able to
>>>>>>> communicate with CS, or with the upgraded host.
>>>>>>> 
>>>>>>> Does putting a host in Maintenance Mode within CS also trigger maintenance
>>>>>>> mode on the XenServer host, or does it just move the VMs to other hosts?
>>>>>>> 
>>>>>>> And again... what are the best practices for upgrading a XS cluster?
>>>>>>> 
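To make the two "maintenance modes" in the question above concrete: CloudStack's is an API operation that live-migrates the CloudStack-managed VMs, while XenServer's is a host-level disable/evacuate. A rough sketch, assuming CloudMonkey against the CloudStack API and the stock xe CLI (IDs are placeholders, and the exact CloudMonkey verb splitting may differ by version; the underlying API call is prepareHostForMaintenance):

    # CloudStack maintenance: migrate the VMs away, tracked in the CS database
    cloudmonkey prepare hostformaintenance id=<cloudstack-host-id>

    # XenServer maintenance (roughly what XenCenter does): disable, then evacuate
    xe host-disable uuid=<xenserver-host-uuid>
    xe host-evacuate uuid=<xenserver-host-uuid>

As Jeremy notes further down the thread, these are separate steps: maintenance in CS first, then maintenance on the XenServer itself.
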
>>>>>>> On Sat, Jan 2, 2016 at 7:11 PM, Remi Bergsma <[email protected]>
>>>>>>> wrote:
>>>>>>> 
>>>>>>>> "CloudStack should always do the migration of VMs, not the Hypervisor."
>>>>>>>> 
>>>>>>>> That's not true. You can safely migrate outside of CloudStack, as the power
>>>>>>>> report will tell CloudStack where the VMs live and the DB gets updated
>>>>>>>> accordingly. I do this a lot while patching and that works fine on 6.2 and
>>>>>>>> 6.5. I use both CloudStack 4.4.4 and 4.7.0.
>>>>>>>> 
>>>>>>>> Regards, Remi
>>>>>>>> 
>>>>>>>> 
>>>>>>>> Sent from my iPhone
>>>>>>>> 
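For completeness, migrating outside of CloudStack as Remi describes is just a plain XenServer migration; a minimal sketch with the stock xe CLI (names are placeholders):

    # live-migrate a single VM to another host in the same pool
    xe vm-migrate vm=<vm-name-or-uuid> host=<target-host> live=true

    # or empty a host entirely before patching it
    xe host-evacuate uuid=<host-uuid>

CloudStack then picks up the new placement from the hosts' VM/power reports, as Remi says, rather than from the migration call itself.
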
>>>>>>>> On 02 Jan 2016, at 16:26, Jeremy Peterson <[email protected]> wrote:
>>>>>>>> 
>>>>>>>> I don't use XenServer maintenance mode until after CloudStack has put the
>>>>>>>> Host in maintenance mode.
>>>>>>>> 
>>>>>>>> When you initiate maintenance mode from the host rather than CloudStack,
>>>>>>>> the DB does not know where the VMs are and your UUIDs get jacked.
>>>>>>>> 
>>>>>>>> CS is your brains, not the hypervisor.
>>>>>>>> 
>>>>>>>> Maintenance in CS.  All VMs will migrate.  Maintenance in XenCenter.
>>>>>>>> Upgrade.  Reboot.  Join Pool.  Remove Maintenance starting at the hypervisor
>>>>>>>> if needed, then CS, and move on to the next Host.
>>>>>>>> 
>>>>>>>> CloudStack should always do the migration of VMs, not the Hypervisor.
>>>>>>>> 
>>>>>>>> Jeremy
>>>>>>>> 
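Strung together, Jeremy's per-host sequence looks roughly like this. A sketch only: host IDs are placeholders, the CloudMonkey verb splitting may differ by version, and the underlying API calls are prepareHostForMaintenance / cancelHostMaintenance:

    # 1. Maintenance in CS: CloudStack live-migrates all VMs off the host
    cloudmonkey prepare hostformaintenance id=<cs-host-id>
    # 2. Maintenance on the XenServer itself (disable, evacuate any stragglers)
    xe host-disable uuid=<xs-host-uuid>
    xe host-evacuate uuid=<xs-host-uuid>
    # 3. Upgrade, apply service packs, reboot, let the host rejoin the pool
    # 4. Leave maintenance, hypervisor first, then CloudStack
    xe host-enable uuid=<xs-host-uuid>
    cloudmonkey cancel hostmaintenance id=<cs-host-id>

Then move on to the next host.
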
>>>>>>>> 
>>>>>>>> -----Original Message-----
>>>>>>>> From: Davide Pala [mailto:[email protected]]
>>>>>>>> Sent: Friday, January 1, 2016 5:18 PM
>>>>>>>> To: [email protected]<mailto:[email protected]>
>>>>>>>> Subject: R: A Story of a Failed XenServer Upgrade
>>>>>>>> 
>>>>>>>> Hi Alessandro. If you put the master in maintenance mode, you force the
>>>>>>>> election of a new pool master. So when you see the upgraded host as
>>>>>>>> disconnected, you are connected to the new pool master, and the host (as a
>>>>>>>> pool member) cannot communicate with a pool master of an earlier version.
>>>>>>>> The solution? Launch the upgrade on the pool master without entering
>>>>>>>> maintenance mode. And remember a consistent backup!!!
>>>>>>>> 
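If you are not sure which host ended up as pool master after such an election, the xe CLI will tell you directly; a quick sketch, run on any pool member:

    # UUID of the current pool master
    xe pool-list params=master
    # list every host with its software versions for comparison
    xe host-list params=uuid,name-label,software-version

The software-version map includes product_version, so a member still on 6.2 talking to a 6.5 master (or the reverse) shows up immediately.
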
>>>>>>>> 
>>>>>>>> 
>>>>>>>> Sent from my Samsung device
>>>>>>>> 
>>>>>>>> 
>>>>>>>> -------- Original Message --------
>>>>>>>> From: Alessandro Caviglione <[email protected]>
>>>>>>>> Date: 01/01/2016 23:23 (GMT+01:00)
>>>>>>>> To: [email protected]
>>>>>>>> Subject: A Story of a Failed XenServer Upgrade
>>>>>>>> 
>>>>>>>> Hi guys,
>>>>>>>> I want to share my XenServer upgrade adventure, to understand if I did
>>>>>>>> something wrong.
>>>>>>>> I upgraded CS from 4.4.4 to 4.5.2 without any issues; after all the VRs
>>>>>>>> had been upgraded, I started the upgrade process of my XenServer hosts from
>>>>>>>> 6.2 to 6.5.
>>>>>>>> I do not have Pool HA enabled, so I followed this article:
>>>>>>>> http://www.shapeblue.com/how-to-upgrade-an-apache-cloudstack-citrix-xenserver-cluster/
>>>>>>>> 
>>>>>>>> The cluster consists of three XenServer hosts.
>>>>>>>> 
>>>>>>>> First of all I added manage.xenserver.pool.master=false
>>>>>>>> to the environment.properties file and restarted the cloudstack-management
>>>>>>>> service.
>>>>>>>> 
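For reference, that step from the ShapeBlue article is a one-line property plus a restart; a sketch, assuming the file lives where your packaging puts it (commonly /etc/cloudstack/management/environment.properties; adjust the path to your install):

    # keep CloudStack from interfering with the pool master during the upgrade window
    echo "manage.xenserver.pool.master=false" >> /etc/cloudstack/management/environment.properties
    service cloudstack-management restart
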
>>>>>>>> After that I put the Pool Master host in Maintenance Mode and, after all VMs
>>>>>>>> had been migrated, I Unmanaged the cluster.
>>>>>>>> At this point all hosts appeared as "Disconnected" in the CS interface, which
>>>>>>>> should be right.
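
The unmanage step can also be done through the API instead of the UI; a sketch with CloudMonkey (the cluster ID is a placeholder; the underlying call is updateCluster with a managedstate parameter):

    # find the cluster ID, then stop CloudStack from managing it
    cloudmonkey list clusters
    cloudmonkey update cluster id=<cluster-id> managedstate=Unmanaged
    # and later, to manage it again
    cloudmonkey update cluster id=<cluster-id> managedstate=Managed

Hosts showing "Disconnected" while the cluster is unmanaged is expected, as noted above.
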
>>>>>>>> Next I put the XenServer 6.5 CD in the host that was in Maintenance Mode and
>>>>>>>> started an in-place upgrade.
>>>>>>>> After XS 6.5 had been installed, I installed 6.5 SP1 and rebooted again.
>>>>>>>> At this point I expected that, after clicking Manage Cluster in CS, all
>>>>>>>> the hosts would come back to "Up" and I could go ahead upgrading the other
>>>>>>>> hosts....
>>>>>>>> 
>>>>>>>> But instead, all the hosts still appeared as "Disconnected"; I tried a couple
>>>>>>>> of cloudstack-management service restarts without success.
>>>>>>>> 
>>>>>>>> So I opened XenCenter and connected to the Pool Master I had upgraded to 6.5;
>>>>>>>> it appeared to be in Maintenance Mode, so I tried to exit Maintenance Mode but
>>>>>>>> I got the error: "The server is still booting"
>>>>>>>> 
>>>>>>>> After some investigation, I ran the command "xe task-list" and this is the
>>>>>>>> result:
>>>>>>>> 
>>>>>>>> uuid ( RO)                : 72f48a56-1d24-1ca3-aade-091f1830e2f1
>>>>>>>>           name-label ( RO): VM.set_memory_dynamic_range
>>>>>>>>     name-description ( RO):
>>>>>>>>               status ( RO): pending
>>>>>>>>             progress ( RO): 0.000
>>>>>>>> 
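A pending task like that can sometimes be cleared without rebooting the host; a sketch with the stock xe CLI, using the UUID reported above (not every task is cancellable):

    # list pending tasks and try to cancel the stuck one
    xe task-list params=uuid,name-label,status
    xe task-cancel uuid=72f48a56-1d24-1ca3-aade-091f1830e2f1
    # if xapi itself is wedged, restarting the toolstack is the usual next step
    xe-toolstack-restart
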
>>>>>>>> I tried a couple of reboots but nothing changed.... so I decided to shut
>>>>>>>> down the server, force-promote a slave host to master with emergency mode,
>>>>>>>> remove the old server from CS, and reboot CS.
>>>>>>>> 
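For the record, the "force-promote a slave host to master" step maps to the emergency pool commands in the xe CLI; a sketch, run on the slave being promoted, with the usual caveat that this is a last resort:

    # on the surviving slave: declare this host the new pool master
    xe pool-emergency-transition-to-master
    # then point the remaining members at it
    xe pool-recover-slaves
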
>>>>>>>> After that, I saw my cluster up and running again, so I installed XS
>>>>>>>> 6.2 SP1 on the "upgraded" host and added it back to the cluster....
>>>>>>>> 
>>>>>>>> So after an entire day of work, I'm in the same situation! :D
>>>>>>>> 
>>>>>>>> Can anyone tell me if I did something wrong??
>>>>>>>> 
>>>>>>>> Thank you very much!
