It's not just 6.0.2 which requires a license.  On 6.1 you also need a
license to enable XenServer HA.  Only 6.2 and higher (and legacy XCP)
offer XenServer HA without a license.  Thankfully you can detect both the
version and whether a license is present, which means we can do the right
thing for older XenServer versions that have one.
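
For reference, here's a rough sketch of that detection against the XenAPI
Python bindings (the field names "product_version" and "expiry" are from
memory, so verify them against the target version; XCP detection is left
out):

    import datetime
    import XenAPI

    def detect_ha_capability(url, user, password):
        """Return 'ha-free', 'ha-licensed' or 'no-ha' for the pool master."""
        session = XenAPI.Session(url)
        session.xenapi.login_with_password(user, password)
        try:
            pool = session.xenapi.pool.get_all()[0]
            master = session.xenapi.pool.get_master(pool)
            sw = session.xenapi.host.get_software_version(master)
            major, minor = [int(x) for x in sw["product_version"].split(".")[:2]]
            if (major, minor) >= (6, 2):
                return "ha-free"    # 6.2 and higher: pool HA needs no license
            # 6.1 and prior: pool HA needs an unexpired license
            expiry = session.xenapi.host.get_license_params(master).get("expiry", "")
            if expiry == "never":
                return "ha-licensed"
            if expiry and datetime.datetime.strptime(
                    expiry, "%Y%m%dT%H:%M:%SZ") > datetime.datetime.utcnow():
                return "ha-licensed"
            return "no-ha"
        finally:
            session.xenapi.session.logout()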

The complication comes when a license exists but is about to expire.  With
a legacy license (6.1 and prior), if the license expires all features keep
running, but you can't create anything which requires an operational
license.  I don't know what that would mean for pool HA, since we're not
talking about protecting VMs.

By the way, in all versions of XenServer you can enable HA without also
requiring VMs to be protected.  The XenCenter UI makes it look like
enabling HA for the first time requires a VM to be specified, but it
doesn't.  The UI is just making an assumption.
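
Against the API that looks roughly like this (a sketch; sr_ref is assumed
to be an opaque ref to a shared SR that can host the heartbeat volume):

    def enable_pool_ha(session, sr_ref):
        # Enable HA for the pool only; no VM is marked as protected.
        session.xenapi.pool.enable_ha([sr_ref], {})
        # Protecting a VM is a separate, per-VM opt-in, e.g.:
        #   session.xenapi.VM.set_ha_restart_priority(vm_ref, "restart")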

So for legacy, I can see modifying my flow to include:

- If the host is legacy and licensed, follow the 6.2-and-higher path; if
it is not licensed, degrade to the 4.3-and-prior behaviour but warn the
admin at host-creation time (see the sketch below).
- If the host is legacy, monitor the license state and warn when the
license is due to expire. (Could also see doing this for vSphere as a
convenience.)
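
The decision itself could be as small as this (the mode strings and the
logging are placeholders of mine, not existing CloudStack code):

    import logging

    log = logging.getLogger("xenserver-ha")

    def choose_ha_mode(capability):
        # Map the detected capability onto an HA strategy (sketch only).
        if capability in ("ha-free", "ha-licensed"):
            return "xenha"       # follow the 6.2-and-higher path
        log.warning("Unlicensed legacy XenServer host: degrading to "
                    "4.3-style CloudStack-managed HA")
        return "cloudstack"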

-tim

On Tue, May 5, 2015 at 8:34 AM, Remi Bergsma <r...@remi.nl> wrote:

> What if we follow the vendor here? Citrix supports XenServer 6.0.2 until
> June 2018:
> https://www.citrix.nl/support/product-lifecycle/product-matrix.html
>
> In my opinion we should support it. I don't want to leave people at 4.3.x
> because of this.
>
> Regards,
> Remi
>
>
> On Tue, May 5, 2015 at 1:28 PM, S. Brüseke - proIO GmbH <
> s.brues...@proio.com> wrote:
>
> > In my opinion the way we should go is "keep it simple". We should
> > really consider dropping support for XenServer 6.0.2 here rather than
> > making things more complicated by providing more than one option.
> > Of course it depends on how many CS installations are still using
> > XenServer 6.0.2!
> > Can somebody give more information on this?
> >
> > Mit freundlichen Grüßen / With kind regards,
> >
> > Swen Brüseke
> >
> > -----Original Message-----
> > From: Remi Bergsma [mailto:r...@remi.nl]
> > Sent: Tuesday, May 5, 2015 12:36
> > To: dev@cloudstack.apache.org
> > Subject: Re: [DISCUSS] XenServer and HA: the way forward
> >
> > Hi all,
> >
> > Thanks for pointing me to the proposal, Koushik. Too bad no one responded
> > to such a major change.
> >
> > When I put my "Operations" hat on, I see several issues. First of all,
> > there was no mention of this change in the release notes, neither as a
> > new feature nor as a bug that was fixed. How do we expect people who
> > operate CloudStack to know about this? It's not even in the recommended
> > install instructions for a new cloud today. In my experience, when I
> > talk to people about this, I find that almost no one knows. We as a
> > community can do better than this!
> >
> > For XenServer 6.5 and 6.2 one can enable XenHA for the pool only, but
> > for XenServer 6.0.2 this is a different story, because as far as I know
> > one can only enable HA as a whole (HA on pool + HA on VMs). And this is
> > only if you have the paid version, which we happen to have. But I don't
> > think it is a solution, as this leads to corruption when XenServer and
> > CloudStack try to recover the same VM at the same time (trust me, I've
> > been there). Why do we even "support" 6.0.2, one could ask?
> >
> > I still have some XenServer 6.0.2 clusters running. If the pool master
> > crashes I need to manually appoint a new one. I don't like manual work,
> > and if I had known this before, I wouldn't have upgraded before it was
> > resolved. Or am I missing something here?
> >
> > If we want to drop support for older XenServer versions, then let's
> > vote for it and be very clear about it. Dropping XenServer 6.0.2 comes
> > a bit too early if you ask me.
> >
> > Let's discuss how to proceed. I still feel the best solution is to add
> > a switch between both HA methods so one can choose which one suits
> > best, and for older XenServer versions we will restore the HA feature
> > that way.
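> >
> > Detecting which mode is active could be as simple as checking the
> > pool's ha_enabled flag (a sketch against the XenAPI Python bindings):
> >
> >     def xenha_enabled(session):
> >         # pool.ha_enabled reflects whether XenServer HA is active
> >         pool = session.xenapi.pool.get_all()[0]
> >         return session.xenapi.pool.get_ha_enabled(pool)
> >
> > If it returns True we keep the 4.4+ behaviour (XenHA elects the
> > master); if False, CloudStack stays fully in control as in 4.3.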
> >
> > Any thoughts?
> >
> > Regards,
> > Remi
> >
> > On Tue, May 5, 2015 at 8:13 AM, Koushik Das <koushik....@citrix.com> wrote:
> >
> > > The below is the proposal for switching to XenServer HA.
> > >
> > >
> > > http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201403.mbox/%3c83f77ff1bd50194ab65afa5479d082a71e4...@sjcpex01cl02.citrite.net%3E
> > >
> > >
> > > On 04-May-2015, at 9:03 PM, Tim Mackey <tmac...@gmail.com> wrote:
> > >
> > > > Thanks for starting this thread Remi.
> > > >
> > > > From my perspective the pros of simply enabling XenServer HA are:
> > > >
> > > > - automatic election of pool master in the event of hardware failure
> > > > - automatic fencing of a host in the event of dom0 corruption
> > > > - automatic fencing of a host in the event of heartbeat failure
> > > >
> > > > The risks of simply enabling XenServer HA are:
> > > >
> > > > - additional code to detect a newly elected pool master
> > > > - acceptance of the fact an admin can force a new pool master from
> > > > XenServer CLI
> > > > - requirement for pool size to be greater than 2 (pool size of 2
> > > > results in semi-deterministic fencing which isn't user obvious)
> > > > - understanding that storage heartbeat can be shorter than storage
> > > > timeout (aggressive fencing)
> > > > - understanding that HA plans are computed even when no VMs are
> > > > protected (performance decrease)
> > > >
> > > > One question we'll want to decide on is who is the primary actor
> > > > when it comes to creating the pool, since that will define the first
> > > > pool master.  During my demo build using 4.4 at CCCEU I expected to
> > > > add pool members through the CS UI, but found that adding them in
> > > > XenServer was required.  This left me in an indeterminate state wrt
> > > > pool members.
> > > >
> > > > I vote that if a host is added to CS and it *is* already a member
> > > > of a pool, that the pool be imported as a cluster and any future
> > > > membership changes happen using CS APIs.  If a host is added which
> > > > isn't a member of a pool, then the user be asked if they wish to add
> > > > it to an existing cluster (and behind the scenes add it to a pool),
> > > > or create a new cluster and add it to that cluster.  This would be a
> > > > change to the "add host" semantics.  Once the host is added, we can
> > > > enable XenServer HA on the pool if it satisfies the requirements for
> > > > XenServer HA (has shared storage and three or more pool members).
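> > > >
> > > > Telling those two cases apart is cheap (a sketch; a standalone host
> > > > answers for a pool containing only itself):
> > > >
> > > >     def already_in_pool(session):
> > > >         # More than one host behind this connection means the host
> > > >         # already belongs to an existing pool.
> > > >         return len(session.xenapi.host.get_all()) > 1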
> > > >
> > > > There are some details we'd want to take care of, but this flow
> > > > makes sense to me, and we could use it even with upgrades.
> > > >
> > > > -tim
> > > >
> > > > On Mon, May 4, 2015 at 6:04 AM, Remi Bergsma <r...@remi.nl> wrote:
> > > >
> > > >> Hi all,
> > > >>
> > > >> Since CloudStack 4.4 the implementation of HA in CloudStack was
> > > >> changed to use the XenHA feature of XenServer. As of 4.4, it is
> > > >> expected to have XenHA enabled for the pool (not for the VMs!) and
> > > >> so XenServer will be the one to elect a new pool master, whereas
> > > >> CloudStack did it before. Also, XenHA takes care of fencing the box
> > > >> instead of CloudStack should storage be unavailable. To be exact,
> > > >> they both try to fence, but XenHA is usually faster.
> > > >>
> > > >> To be 100% clear: HA on VMs is in all cases done by CloudStack.
> > > >> It's just that without a pool master, no VMs will be recovered
> > > >> anyway. This caused me some headaches, first of all because I
> > > >> didn't know about the change. We probably need to document this
> > > >> somewhere. This is important, because without XenHA turned on
> > > >> you'll not get a new pool master (a behaviour change).
> > > >>
> > > >> Personally, I don't like the fact that we have "two captains" in
> > > >> case something goes wrong. But some say they like this behaviour.
> > > >> I'm OK with both, as long as one can choose whatever suits their
> > > >> needs best.
> > > >>
> > > >> In Austin I talked to several people about this. We came up with
> > > >> the idea to have CloudStack check whether XenHA is on or not. If it
> > > >> is, it does the current 4.4+ behaviour (XenHA selects the new pool
> > > >> master). When it is not, we do the CloudStack 4.3 behaviour where
> > > >> CloudStack is fully in control.
> > > >>
> > > >> I also talked to Tim Mackey and he wants to help implement this,
> > > >> but he doesn't have much time. The idea is to have someone else
> > > >> join in to code the change, and then Tim will be able to help out
> > > >> on a regular basis should we need in-depth knowledge of XenServer
> > > >> or its implementation in CloudStack.
> > > >>
> > > >> Before we kick this off, I'd like to discuss and agree that this
> > > >> is the way forward. Also, if you're interested in joining this
> > > >> effort, let me know and I'll kick it off.
> > > >>
> > > >> Regards,
> > > >> Remi
> > > >>
> > >
> > >
> >
> >
> >
> >
> >
> >
>
