Fwd: Problem creating networks after 4.11 upgrade

2018-11-05 Thread Jean-Francois Nadeau
+dev

this is considered a blocker for an upgrade to 4.11.2 (at least for us):
https://github.com/apache/cloudstack/issues/2989

-- Forwarded message -
From: Jean-Francois Nadeau 
Date: Mon, Nov 5, 2018 at 7:21 AM
Subject: Re: Problem creating networks after 4.11 upgrade
To: 


Test was on 4.11.2 RC3. Will send to the dev list.


Re: [VOTE] Apache CloudStack 4.11.2.0 RC4

2018-11-05 Thread Jean-Francois Nadeau
-1

Only because we believe this issue is a regression when upgrading from 4.9.3.
Existing network offerings created under 4.9.3 should continue to work when
creating new networks under 4.11.2.  Please see
https://github.com/apache/cloudstack/issues/2989

best,

Jfn

On Mon, Nov 5, 2018 at 5:04 AM Boris Stoyanov 
wrote:

> +1
>
> I’ve done upgrade testing from 4.9 on a VMware-based environment and it
> went well; I think all the issues delivered with this RC have been verified
> and confirmed. I’ve also executed some general lifecycle tests around main
> components (VMs, Networks, Volumes, Storage, etc.).
>
> Thanks,
> Bobby.
>
>
> boris.stoya...@shapeblue.com
> www.shapeblue.com
> Amadeus House, Floral Street, London  WC2E 9DPUK
> @shapeblue
>
>
>
> On 4 Nov 2018, at 3:52, Rohit Yadav <rohit.ya...@shapeblue.com> wrote:
>
> +1 (binding) based on automated smoketests on xenserver/vmware/kvm and
> manual tests on kvm+centos7 based Adv zone env. I did not check gpg
> signatures on the artifact this time.
>
> Regards.
>
> Get Outlook for Android
>
> 
> From: Andrija Panic <andrija.pa...@gmail.com>
> Sent: Saturday, November 3, 2018 10:24:45 PM
> To: users
> Cc: dev; Paul Angus
> Subject: Re: [VOTE] Apache CloudStack 4.11.2.0 RC4
>
> Assuming I may vote:
>
> +1 from my side
>
> Tested:
> - building DEB packages for Ubuntu
> - advanced and basic zone deployment (KVM, clean install 4.11.2)
> - upgrade from 4.8.0.1 to 4.11.2
> - a bunch of integration tests run from our in-house suite (system and
> user tests) - all PASS, with the exception that RAW templates are broken;
> there is already a GitHub issue from 4.11:
> https://github.com/apache/cloudstack/issues/2820
> - online and offline storage migration from NFS/CEPH to SolidFire
>
>
> Some issues I experienced (perhaps something local to me, but managed to
> reproduce it many times):
> Management/Agent on Ubuntu 14.04:
> When upgrading an existing 4.8 installation to 4.11.2, the init.d scripts
> were not created/overwritten, nor was I asked whether I wanted to replace
> or keep the existing versions (as is done with e.g. agent.properties,
> db.properties, etc.), so this seems like a packaging issue.
> A clean install (or, in my problematic case, a complete uninstall and
> reinstall) works fine in regards to the init.d scripts.
>
> Cheers
> Andrija
>
>
>
>
>
> On Fri, 2 Nov 2018 at 16:36, Wido den Hollander <w...@widodh.nl> wrote:
>
> +1 (binding)
>
> I've tested:
>
> - Building DEB packages for Ubuntu
> - Install DEB packages
> - Upgrade from 4.11.1 to 4.11.2
>
> Wido
>
> On 10/30/18 5:10 PM, Paul Angus wrote:
> Hi All,
>
> By popular demand, I've created a 4.11.2.0 release (RC4), with the
> following artefacts up for testing and a vote:
>
> Git Branch and Commit SHA:
>
>
> https://gitbox.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.11.2.0-RC20181030T1040
> Commit: 840ad40017612e169665fa799a6d31a23ecad347
>
> Source release (checksums and signatures are available at the same
> location):
> https://dist.apache.org/repos/dist/dev/cloudstack/4.11.2.0/
>
> PGP release keys (signed using 8B309F7251EE0BC8):
> https://dist.apache.org/repos/dist/release/cloudstack/KEYS
>
> The vote will be open until Sunday 4th November.
>
> For sanity in tallying the vote, can PMC members please be sure to
> indicate "(binding)" with their vote?
>
> [ ] +1 approve
> [ ] +0 no opinion
> [ ] -1 disapprove (and reason why)
>
> Additional information:
>
> For users' convenience, I've built packages from
> 840ad40017612e169665fa799a6d31a23ecad347 and published RC4 repository here:
> http://packages.shapeblue.com/testing/41120rc4/
>
> The release notes are still work-in-progress, but the systemvm template
> upgrade section has been updated. You may refer to the following for
> systemvm template upgrade testing:
>
>
> http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/latest/index.html
>
> 4.11.2 systemvm templates are as before and available from here:
> http://packages.shapeblue.com/testing/systemvm/4112rc3
>
>
>
>
> Kind regards,
>
> Paul Angus
>
>
> paul.an...@shapeblue.com
> www.shapeblue.com
> Amadeus House, Floral Street, London  WC2E 9DPUK
> @shapeblue
>
>
>
>
>
>
> --
>
> Andrija Panić
>
>


Fwd: SSVM, templates and managed storage (iscsi/KVM)... how does it work ?

2019-03-03 Thread Jean-Francois Nadeau
Hi all,

I'm kicking the tires with managed storage under 4.11.2, with KVM and
Datera as primary storage.

My first attempt at creating a VM from a template stored on NFS secondary
storage failed silently. Looking at the SSVM cloud logs, I saw no exception.
The VM's root disk gets properly created on the backend and attached to the
KVM host, but the block device is blank. Somehow the template did not get
copied over.

Starting troubleshooting from this point... I realize I don't understand
how this works compared to what I'm used to with NFS as both primary and
secondary storage.

I presume the SSVM has to copy the qcow2 template from the NFS secondary
storage to the primary storage, but the primary is iSCSI now... and I did
not set up initiator access for the SSVM, nor have I found instructions
saying I need to do that.

Can someone fill in the blanks on how this works?

thanks all,

Jean-Francois


Re: SSVM, templates and managed storage (iscsi/KVM)... how does it work ?

2019-03-03 Thread Jean-Francois Nadeau
Hi Ivan,

SS (secondary storage) is still on NFS in this case. It's the primary
storage I'm trying to move to iSCSI managed storage, just like
SolidFire/Ceph on primary.

When NFS is used for both SS and primary, I understand the SSVM just
mounts both shares and copies templates when needed.

But if you have a qcow2 template on SS, and primary on managed storage, I
presume the SSVM copies the qcow2 into the just-created dynamic iSCSI
volume in a raw way... but I don't know; that is what I'm trying to
connect the dots on :)


On Sun, Mar 3, 2019 at 2:02 PM Ivan Kudryavtsev 
wrote:

> Jean-Francois,
> NFS primary + NFS secondary has always worked like a charm. It's the most
> common setup for CloudStack, I suppose. It works great on my 4.11.2, worked
> great on 4.11.1, and even on the full-trash 4.10 release.
>
> I have never tried SS on iSCSI and suppose it's the wrong way to go, except
> the way you use SolidFire. For KVM you must use NFS, S3 or Swift.
>
> --
> With best regards, Ivan Kudryavtsev
> Bitworks LLC
> Cell RU: +7-923-414-1515
> Cell USA: +1-201-257-1512
> WWW: http://bitworks.software/
>


Re: SSVM, templates and managed storage (iscsi/KVM)... how does it work ?

2019-03-03 Thread Jean-Francois Nadeau
I just found my problem with regard to the managed storage: for some
reason, the hypervisor type on the primary storage must be set to Any and
not KVM (just like SolidFire). The VM now boots from the template, but I'm
still wondering why that is :)
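For anyone hitting the same thing, the knob in question is the hypervisor
type passed when the pool is registered. Below is a minimal sketch using
the cs Python client; the endpoint, credentials, provider name and URL are
placeholders, and the createStoragePool parameters should be double-checked
against your version's API docs:

# Minimal sketch: registering managed primary storage with the hypervisor
# type set to "Any" rather than "KVM". All values below are placeholders.
from cs import CloudStack

api = CloudStack(
    endpoint="http://mgmt.example.com:8080/client/api",  # placeholder
    key="API_KEY",
    secret="SECRET_KEY",
)

pool = api.createStoragePool(
    zoneid="<zone-uuid>",
    name="datera-primary-1",
    scope="zone",                    # zone-wide managed storage
    hypervisor="Any",                # "Any", not "KVM" (same as SolidFire)
    provider="Datera",               # storage provider/plugin name (assumed)
    url="<provider-specific-url>",   # left out on purpose; plugin-specific
    tags="datera",
)
print(pool)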


On Sun, Mar 3, 2019 at 2:33 PM Ivan Kudryavtsev 
wrote:

> Not sure, KVM works on arbitrary iSCSI target except the Solidfire.


Re: SSVM, templates and managed storage (iscsi/KVM)... how does it work ?

2019-03-05 Thread Jean-Francois Nadeau
Thanks Syed,

That is super helpful.

Jfn

On Mon, Mar 4, 2019 at 5:24 AM Syed Ahmed  wrote:

> Hi JF,
>
> So with the root disk on managed storage, there are two options:
>
> 1. If the backend+hypervisor can support cloning, then an initial template
> will be copied to the managed storage, and all subsequent VMs with a ROOT
> disk on this storage will basically clone that "template" volume.
>
> 2. If the backend+hypervisor don't support cloning, then we create an empty
> volume, mount it on the hypervisor (iSCSI login), copy the template data,
> unmount it, and then create a VM using this volume.
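To connect this back to the initiator-access question earlier in the
thread, option 2 on KVM boils down to roughly the sketch below. This is
not CloudStack's actual agent code; it only assumes the standard iscsiadm
and qemu-img tools, and the IQN, portal and device path are placeholders.

# Rough sketch of option 2 (backend without cloning support). Not the
# actual CloudStack KVM plugin code; paths/IQNs are placeholders.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def provision_root_from_template(template_qcow2, iqn, portal, device):
    # The empty volume was already created on the storage backend.
    # Log the hypervisor (not the SSVM) into the iSCSI target so the
    # LUN shows up as a local block device.
    run(["iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login"])
    try:
        # Copy the qcow2 template onto the raw LUN.
        run(["qemu-img", "convert", "-O", "raw", template_qcow2, device])
    finally:
        # Log out; the VM definition then attaches the LUN directly.
        run(["iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--logout"])

Since the copy happens on the hypervisor, this would explain why the SSVM
itself never needs initiator access to the managed storage.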
>
> Hope this helps
>
> Thanks,
> -Syed
>
> On Sun, Mar 3, 2019 at 2:48 PM Jean-Francois Nadeau <
> the.jfnad...@gmail.com>
> wrote:
>
> > I just found my problem with regard to the managed storage: for some
> > reason, the hypervisor type on the primary storage must be set to Any
> > and not KVM (just like SolidFire). The VM now boots from the template,
> > but I'm still wondering why that is :)


Re: Any plan for 4.11 LTS extension ?

2019-03-15 Thread Jean-Francois Nadeau
I can only agree with Haijiao that 4.11 deserves a longer time span,
because many bugs are naturally found between the .0 and .1 releases of
the next LTS as it gets adopted.

For us, 4.11 could only be adopted with 4.11.2.0, after several bugs
needed to get resolved. So if 4.11's support stops in July, then its time
span was only 6 months from our perspective.

-jfn

On Fri, Mar 15, 2019 at 5:34 AM Wido den Hollander  wrote:

>
>
> On 3/15/19 10:20 AM, Riepl, Gregor (SWISS TXT) wrote:
> > Hi Giles,
> >
> >> I would *expect*  4.13.0 (LTS) to be released in Q2, which  will
> >> supersede the 4.11 branch as the current LTS branch
> >
> > Are you confident that this schedule can be kept?
> > 4.12 is still in RC right now, and I don't think it's a good idea to
> > rush another major release in just 3 months...
> >
> 4.11.3 will be released first with some bugfixes to keep 4.11 a proper
> release.
>
> 4.12 needs to go out now so that we can test and prepare for 4.13. I'm
> confident we can have a stable and proper 4.13 release as long as we
> don't keep the window open for too long.
>
> The major problem is having the master branch open for a long time,
> features going in and people not testing it sufficiently.
>
> By having a relatively short period between 4.12 and 4.13 we can catch
> most bugs and stabilize for a proper LTS.
>
> Wido
>


Re: Latest Qemu KVM EV appears to be broken with ACS

2019-04-26 Thread Jean-Francois Nadeau
Can we consider merging this fix into 4.11.3 as well? For those like us
who really want to make the jump to newer qemu-ev versions but also want
to stick to CloudStack LTS releases.

best,

Jfn

On Tue, Apr 23, 2019 at 6:54 AM Rohit Yadav 
wrote:

> All,
>
>
> I've found and fixed an edge/security case while testing it on CentOS 6;
> the PR should now be compatible with all supported KVM distros:
>
> https://github.com/apache/cloudstack/pull/3278
>
>
> The issue was that the systemvm.iso file includes an authorized_keys file
> from our codebase which may overwrite the payload we send using
> patchviasocket or the virsh qemu-guest-agent. I've removed that
> unknown/default authorized_keys file in the PR.
>
>
> Historically, we had seen a few cases where a VR failed to start with an
> error related to the get_systemvm_template.sh execution (failing with a
> non-zero exit code) that determines the DomR version seen in the logs.
> That issue should now be fixed by my patch.
>
>
> Regards,
>
> Rohit Yadav
>
> Software Architect, ShapeBlue
>
> https://www.shapeblue.com
>
> 
> From: Simon Weller 
> Sent: Tuesday, April 23, 2019 12:59:00 AM
> To: dev@cloudstack.apache.org
> Subject: Re: Latest Qemu KVM EV appears to be broken with ACS
>
> Hey  Andrija,
>
> In our case the SystemVMs were booting fine, but ACS wasn't able to inject
> the payload via the socket.
>
> -Si
>
> 
> From: Andrija Panic 
> Sent: Monday, April 22, 2019 1:16 PM
> To: dev
> Subject: Re: Latest Qemu KVM EV appears to be broken with ACS
>
> Hi Simon, all,
>
> did you try running CentOS with a newer kernel? I just hit a really
> strange issue after upgrading a KVM host from stock qemu 1.5.3 to
> qemu-kvm-ev 2.12 with the stock 3.10 kernel (issues on Intel CPUs, while
> no issues on AMD Opteron), which was fixed by upgrading the kernel to 4.4
> (ELRepo version).
>
> My case was that SystemVMs were not able to boot, stuck on the "booting
> from hard drive" SeaBIOS message (actually any VM with VirtIO "hardware"),
> using qemu-kvm-ev 2.12 (while there were no issues on stock 1.5.3).
>
> What I could find is that there are obviously some issues when using
> nested KVM on top of ESXi (or Hyper-V), which is what I'm running.
> When I switched the template to an Intel-emulated one, i.e. the "Windows
> 2016" OS type, VMs were able to boot just fine (user VMs at least).
>
> Might be related to the original issue in this thread...
>
> Best,
> Andrija
>
>
> On Thu, 18 Apr 2019 at 22:36, Sven Vogel  wrote:
>
> > Hi Rohit,
> >
> > Thx we will test it!
> >
> >
> >
> > Sent from my iPhone
> >
> >
> > __
> >
> > Sven Vogel
> > Teamlead Platform
> >
> > EWERK RZ GmbH
> > Brühl 24, D-04109 Leipzig
> > P +49 341 42649 - 11
> > F +49 341 42649 - 18
> > s.vo...@ewerk.com
> > www.ewerk.com
> >
> > Managing Directors:
> > Dr. Erik Wende, Hendrik Schubert, Frank Richter, Gerhard Hoyer
> > Commercial register: Leipzig HRB 17023
> >
> > Certified to:
> > ISO/IEC 27001:2013
> > DIN EN ISO 9001:2015
> > DIN ISO/IEC 20000-1:2011
> >
> > EWERK-Blog | LinkedIn | Xing | Twitter | Facebook
> >
> > > On 18.04.2019 at 21:44, Rohit Yadav wrote:
> > >
> > > I've sent a PR that attempts to solve the issue. It is under testing
> but
> > ready for review: https://github.com/apache/cloudstack/pull/3278
> > >
> > >
> > > Thanks.
> > >
> > >
> > > Regards,
> > >
> > > Rohit Yadav
> > >
> > > Software Architect, ShapeBlue
> > >
> > > https://www.shapeblue.com
> > >
> > > 
> > > From: Simon Weller 
> > > Sent: Monday, April 15, 2019 7:24:40 PM
> > > To: dev@cloudstack.apache.org
> > > Subject: Re: Latest Qemu KVM EV appears to be broken with ACS
> > >
> > > +1 for the qemu guest agent approach.
> > >
> > >
> > > 
> > > From: Wido den Hollander 
> > > Sent: Saturday, April 13, 2019 2:32 PM
> > > To: dev@cloudstack.apache.org; Rohit Yadav
> > >

Using S3/Minio as the only secondary storage

2019-07-16 Thread Jean-Francois Nadeau
Hello Everyone,

I was wondering whether it is common, or even recommended, to use an
S3-compatible storage system as the only secondary storage provider?

The environment is 4.11.3.0 with KVM (CentOS 7.6), and our tier-1 storage
solution also provides an S3-compatible object store (apparently Minio
under the hood).

I have always used NFS to install the SSVM templates, and the install
script (cloud-install-sys-tmplt) only takes a mount point. How, if
possible, would I proceed with S3-only storage?
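For reference, registering the object store itself would presumably go
through the addImageStore API (provider=S3) rather than the NFS-oriented
install script. Here is a hedged sketch using CloudStack's standard
request signing; the detail key names (accesskey, secretkey, endpoint,
bucket) are my best recollection of the S3 image store settings, so verify
them against the 4.11 API docs before relying on this:

# Hedged sketch: register an S3-compatible object store as an image store.
# Endpoint/credentials are placeholders; detail key names are assumptions.
import base64, hashlib, hmac, urllib.parse, urllib.request

API = "http://mgmt.example.com:8080/client/api"   # placeholder
APIKEY, SECRET = "API_KEY", "SECRET_KEY"          # placeholders

def call(params):
    params = dict(params, apikey=APIKEY, response="json")
    # CloudStack signing: sort the params, lowercase the query string,
    # HMAC-SHA1 it with the secret key, then base64-encode.
    qs = "&".join(f"{k}={urllib.parse.quote(str(v), safe='')}"
                  for k, v in sorted(params.items()))
    sig = base64.b64encode(
        hmac.new(SECRET.encode(), qs.lower().encode(), hashlib.sha1).digest()
    ).decode()
    return urllib.request.urlopen(
        f"{API}?{qs}&signature={urllib.parse.quote(sig, safe='')}"
    ).read()

print(call({
    "command": "addImageStore",
    "name": "minio-secondary",
    "provider": "S3",
    "details[0].key": "accesskey", "details[0].value": "MINIO_ACCESS",
    "details[1].key": "secretkey", "details[1].value": "MINIO_SECRET",
    "details[2].key": "endpoint",  "details[2].value": "s3.example.com:9000",
    "details[3].key": "bucket",    "details[3].value": "cloudstack-secondary",
}))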

best,

Jean-Francois


Re: Using S3/Minio as the only secondary storage

2019-07-17 Thread Jean-Francois Nadeau
Thanks Will,

I remember having this discussion with Pierre-Luc about his use of Swift
for templates. I was curious about the differences between S3 and Swift
for secondary storage since, looking at the CloudStack UI when setting up
an S3 image store, the NFS staging is optional. And this makes sense to
me: if your object storage is fast and accessible locally, why the need
for staging/caching? The documentation could mention whether it is
possible to use S3 secondary storage and nothing else, starting with
whether the SSVM templates can be uploaded to a bucket. I will certainly
ask Syed later today :)

best

Jfn

On Wed, Jul 17, 2019 at 6:59 AM Will Stevens  wrote:

> Hey JF,
> We use the Swift object store as the storage backend for secondary
> storage.  I have not tried the S3 integration, but the last time I looked
> at the code for this (admittedly, a long time ago), the Swift and S3 logic
> was more intertwined than I liked. The CloudOps/cloud.ca team had to do a
> lot of work to get the Swift integration to a reasonable working state. I
> believe all of our changes have been upstreamed quite some time ago. I
> don't know if anyone is doing this for the S3 implementation.
>
> I can't speak to the S3 implementation because I have not looked at it in a
> very long time, but the Swift implementation requires a "temporary NFS
> staging area" that essentially acts kind of like a buffer between the
> object store and primary storage when templates and such are used by the
> hosts.
>
> I think Pierre-Luc and Syed have a clearer picture of all the moving
> pieces, but that is a quick summary of what I know without digging in.
>
> Hope that helps.
>
> Cheers,
>
> Will
>
>


Re: Ansible deploy cloudstack and add zone

2019-07-24 Thread Jean-Francois Nadeau
Hi Li,

We are using Ansible to build and manage capacity in our CloudStack
environments end to end: setting up zones, projects, networks, compute
offerings, pods, clusters, and hosts.

Make sure to use a current Ansible release, 2.8 or newer. See
https://docs.ansible.com/ansible/latest/modules/list_of_cloud_modules.html
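For a concrete flavor of what those modules automate: the cs_* modules sit
on top of the cs Python library and drive the plain CloudStack API, so a
minimal zone bring-up looks roughly like the sketch below. All names, IPs
and credentials are placeholders; check each call's required parameters
against the API docs for your version.

# Rough sketch of zone initialization via the CloudStack API, i.e. what
# the Ansible cs_zone/cs_pod/cs_cluster/cs_host modules do under the hood.
from cs import CloudStack

api = CloudStack(
    endpoint="http://mgmt.example.com:8080/client/api",  # placeholder
    key="API_KEY",
    secret="SECRET_KEY",
)

zone = api.createZone(name="zone01", networktype="Advanced",
                      dns1="8.8.8.8", internaldns1="10.0.0.2")
pod = api.createPod(zoneid=zone["zone"]["id"], name="pod01",
                    gateway="10.0.1.1", netmask="255.255.255.0",
                    startip="10.0.1.10", endip="10.0.1.50")
cluster = api.addCluster(zoneid=zone["zone"]["id"], podid=pod["pod"]["id"],
                         clustername="cluster01", clustertype="CloudManaged",
                         hypervisor="KVM")
api.addHost(zoneid=zone["zone"]["id"], podid=pod["pod"]["id"],
            clusterid=cluster["cluster"][0]["id"], hypervisor="KVM",
            url="http://kvm01.example.com", username="root",
            password="<kvm-host-password>")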

regards,

Jfn

On Wed, Jul 24, 2019 at 12:23 PM li jerry  wrote:

> Hello All
>
> I saw in the community documentation that you can deploy a CloudStack
> environment via Ansible.
> Is there any way to quickly initialize CloudStack (create a zone, add
> hosts, etc.) through Ansible or other tools?
>


Re: [DISCUSS] VMs crashing/stopped during live migration?

2019-11-21 Thread Jean-Francois Nadeau
Hi Andrija,

We experienced that problem with stock packages on CentOS 7.4. Live
migration would frequently fail and leave the VM dead. We have since moved
to the RHEV packages for qemu. Libvirt is still stock per CentOS 7.6
(4.5). I want to say the situation improved, but I can't tell yet whether
we have a 100% success rate on live migrations (as it should be!).

Red Hat has also been messing up severely with the stock libvirt versions
between 7.4/7.5/7.6, in such a way that it broke live migration
compatibility (CPU definitions). I'm at a crossroads right now whether to
entirely ditch CentOS/Red Hat in favor of Ubuntu and get well-tested stock
packages.

best,

-Jfn



On Thu, Nov 21, 2019 at 5:25 PM Andrija Panic 
wrote:

> Hi guys.
>
> I wanted to see if any of you have seen anything similar in master, as
> below.
>
> I've been testing some work/PRs (against the current master) and I've seen
> that VMs will occasionally crash/be stopped when live migration is
> happening. I experienced this on a NEW/EMPTY env, with 2 KVM hosts and
> only the SSVM and CPVM, so it is not a capacity issue or similar.
>
> This is happening with CentOS 7 (CentOS 7.3 I believe, but we also updated
> the packages to the latest stock ones and the same issue happened again).
>
> This is still under investigation, but I was wondering if anyone else has
> seen a similar thing happening?
>
> Best,
>
> --
>
> Andrija Panić
>


Re: [DISCUSS] VMs crashing/stopped during live migration?

2019-11-21 Thread Jean-Francois Nadeau
We saw the issue on both 4.9.3 and 4.11.2. This seems to be a race in
libvirt itself, and was hit mostly when we put a host into maintenance and
5 live migrations are processed in parallel. I don't recall triggering the
bug when migrating a single VM at a time.
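Since single migrations seemed safe, one workaround is to drain a host one
VM at a time through the API instead of letting maintenance mode fire
several migrations in parallel. A rough, untested sketch (endpoint,
credentials and host IDs are placeholders):

# Rough sketch: drain a KVM host one live migration at a time, waiting for
# each async job to finish before starting the next. Untested.
import time
from cs import CloudStack

api = CloudStack(endpoint="http://mgmt.example.com:8080/client/api",
                 key="API_KEY", secret="SECRET_KEY")  # placeholders

def drain_host(source_host_id, target_host_id):
    vms = api.listVirtualMachines(hostid=source_host_id, state="Running")
    for vm in vms.get("virtualmachine", []):
        job = api.migrateVirtualMachine(virtualmachineid=vm["id"],
                                        hostid=target_host_id)
        # jobstatus 0 = pending; 1 = succeeded; 2 = failed.
        while api.queryAsyncJobResult(jobid=job["jobid"])["jobstatus"] == 0:
            time.sleep(5)

drain_host("<source-host-uuid>", "<target-host-uuid>")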

On Thu, Nov 21, 2019 at 7:48 PM Andrija Panic 
wrote:

> That sucks... thx both.
>
> @both - which ACS versions do you use (and encounter such issues)?
>
> Ubuntu comes with a whole other set of issues (I was losing my nerves
> over very idiotic things, last time a week ago...), though most can be
> managed with some workarounds.
> But yes, Qemu/libvirt should be better on Ubuntu - free of Red Hat's
> s$^%tty business politics - i.e. on CentOS 6.x you were able to live
> migrate a VM WITH all its volumes to another host/storage. On CentOS 7
> you can't do that any more, unless you are using qemu-kvm-ev (but not the
> regular one from the CentOS SIG repo; you need the one from the oVirt
> project).
>
> I'm just trying to understand whether this is also happening on e.g. ACS
> 4.11 - so as to stop digging around the problem (and assume it's purely
> CentOS which is broken - why do all great things need to come to an
> end... damn it).
>
> (well, I could also test the same ACS code on Ubuntu and see if there are
> no issues there with live migrations...)
>
> Thanks
> Andrija
>
>
> --
>
> Andrija Panić
>