Re: Primary interface on Windows templates

2017-10-04 Thread Dmitriy Kaluzhniy
Hello,
Thank you for your answer, Ivan. Yes, I think it will work, but it is a
very fragile option. I hope it can be done in another way.

2017-10-02 17:46 GMT+03:00 Ivan Kudryavtsev :

> Hi, I believe that if you change the OS type to Linux, you'll get it. But
> it could lead to problems with storage drivers, as ACS will announce it as
> virtio too.
>
> On 2 Oct 2017 at 19:58, "Dmitriy Kaluzhniy" <
> dmitriy.kaluzh...@gmail.com> wrote:
>
> > Hello,
> > I was working with templates and found out that Windows templates
> > automatically get an E1000 interface. Is there any way to change it to
> > virtio?
> >
> > --
> >
> >
> >
> >
> > Best regards,
> > Dmitriy Kaluzhniy
> > +38 (073) 101 14 73
> >
>



-- 



Best regards,
Dmitriy Kaluzhniy
+38 (073) 101 14 73


Re: Primary interface on Windows templates

2017-10-04 Thread Dmitriy Kaluzhniy
As I found, it is hardcoded here:
LibvirtComputingResource.isGuestPVEnabled
So there are two options: either I change it in the code, or I set the OS
Type for the template to "Windows PV" or "Other PV". What is the difference
between these two types?
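
For anyone following along, the chosen model ultimately surfaces in the
guest's libvirt domain XML, roughly like this (an illustrative sketch only;
the bridge name cloudbr0 is just a common CloudStack default and will vary):

    <!-- default for Windows OS types: emulated Intel E1000 -->
    <interface type='bridge'>
      <source bridge='cloudbr0'/>
      <model type='e1000'/>
    </interface>

    <!-- with a PV-enabled OS type: paravirtual virtio -->
    <interface type='bridge'>
      <source bridge='cloudbr0'/>
      <model type='virtio'/>
    </interface>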

2017-10-04 14:36 GMT+03:00 Dmitriy Kaluzhniy :

> Hello,
> Thank you for your answer, Ivan. Yes, I think it will work, but it is a
> very fragile option. I hope it can be done in another way.
>
> 2017-10-02 17:46 GMT+03:00 Ivan Kudryavtsev :
>
>> Hi, I believe that if you change the OS type to Linux, you'll get it. But
>> it could lead to problems with storage drivers, as ACS will announce it as
>> virtio too.
>>
>> On 2 Oct 2017 at 19:58, "Dmitriy Kaluzhniy" <
>> dmitriy.kaluzh...@gmail.com> wrote:
>>
>> > Hello,
>> > I was working with templates and found out that Windows templates
>> > automatically get an E1000 interface. Is there any way to change it to
>> > virtio?
>> >
>> > --
>> >
>> >
>> >
>> >
>> > Best regards,
>> > Dmitriy Kaluzhniy
>> > +38 (073) 101 14 73
>> >
>>
>
>
>
> --
>
>
>
> Best regards,
> Dmitriy Kaluzhniy
> +38 (073) 101 14 73
>



-- 



Best regards,
Dmitriy Kaluzhniy
+38 (073) 101 14 73


Re: Primary interface on Windows templates

2017-10-04 Thread Simon Weller
Also note that, as of 4.10, there is new support for virtio-scsi on KVM.


Check out this PR note for an example of how to set it up on a template:

https://github.com/apache/cloudstack/pull/1955#issuecomment-284440859
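
For reference, once virtio-scsi is active, the storage section of the
guest's libvirt XML ends up looking roughly like this (an illustrative
sketch only; see the PR above for the exact template settings):

    <controller type='scsi' model='virtio-scsi'/>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <target dev='sda' bus='scsi'/>
    </disk>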


- Si






From: Dmitriy Kaluzhniy 
Sent: Wednesday, October 4, 2017 8:00 AM
To: dev@cloudstack.apache.org
Subject: Re: Primary interface on Windows templates

As I found, it is hardcoded here:
LibvirtComputingResource.isGuestPVEnabled
So there are two options: either I change it in the code, or I set the OS
Type for the template to "Windows PV" or "Other PV". What is the difference
between these two types?

2017-10-04 14:36 GMT+03:00 Dmitriy Kaluzhniy :

> Hello,
> Thank you for your answer, Ivan. Yes, I think it will work, but it is a
> very fragile option. I hope it can be done in another way.
>
> 2017-10-02 17:46 GMT+03:00 Ivan Kudryavtsev :
>
>> Hi, I believe that if you change the OS type to Linux, you'll get it. But
>> it could lead to problems with storage drivers, as ACS will announce it as
>> virtio too.
>>
>> On 2 Oct 2017 at 19:58, "Dmitriy Kaluzhniy" <
>> dmitriy.kaluzh...@gmail.com> wrote:
>>
>> > Hello,
>> > I was working with templates and found out that Windows templates
>> > automatically get an E1000 interface. Is there any way to change it to
>> > virtio?
>> >
>> > --
>> >
>> >
>> >
>> >
>> > Best regards,
>> > Dmitriy Kaluzhniy
>> > +38 (073) 101 14 73
>> >
>>
>
>
>
> --
>
>
>
> Best regards,
> Dmitriy Kaluzhniy
> +38 (073) 101 14 73
>



--



Best regards,
Dmitriy Kaluzhniy
+38 (073) 101 14 73


Re: Primary interface on Windows templates

2017-10-04 Thread Andrija Panic
Run ps aux | grep VMNAME and you can see the differences, I guess :) -
probably next to none (but I didn't really check).

We run all Windows VMs as "Windows PV", but have also implemented
additional Hyper-V enlightenment flags for KVM in the XML definition for
all OS types that have the word "Windows" in them (
https://www.linux-kvm.org/images/0/0a/2012-forum-kvm_hyperv.pdf ,
http://blog.wikichoon.com/2014/07/enabling-hyper-v-enlightenments-with-kvm.html
). Windows 2008 and Windows 7 like to crash with a BSOD in busy VMs
without these KVM flags...
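
For illustration, the enlightenments from those links go into the
<features> and <clock> sections of the domain XML, roughly like this
(a sketch only - the exact flags depend on your libvirt/QEMU versions):

    <features>
      <hyperv>
        <relaxed state='on'/>
        <vapic state='on'/>
        <spinlocks state='on' retries='8191'/>
      </hyperv>
    </features>
    <clock offset='utc'>
      <timer name='hypervclock' present='yes'/>
    </clock>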

Simply install Windows as "Windows PV"; during installation, attach an ISO
with recent virtio drivers in order to load the SCSI/virtio storage
drivers, and you are fine. After the first boot, install the virtio drivers
for the NIC as well, then proceed normally to generate the Windows template
(sysprep at the end) and you are done.
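
Once the VM is up, you can double-check on the KVM host that the guest
really got paravirtual devices (assuming shell access to the hypervisor;
VMNAME is the instance name, as in the ps aux example above):

    virsh dumpxml VMNAME | grep -E "model type|bus="

You should see virtio rather than e1000/ide there.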

Best


On 4 October 2017 at 15:00, Dmitriy Kaluzhniy 
wrote:

> As I found, it is hardcoded here:
> LibvirtComputingResource.isGuestPVEnabled
> So there are two options: either I change it in the code, or I set the OS
> Type for the template to "Windows PV" or "Other PV". What is the
> difference between these two types?
>
> 2017-10-04 14:36 GMT+03:00 Dmitriy Kaluzhniy :
>
> > Hello,
> > Thank you for your answer, Ivan. Yes, I think it will work, but it is
> > a very fragile option. I hope it can be done in another way.
> >
> > 2017-10-02 17:46 GMT+03:00 Ivan Kudryavtsev :
> >
> >> Hi, I believe that if you change the OS type to Linux, you'll get it.
> >> But it could lead to problems with storage drivers, as ACS will
> >> announce it as virtio too.
> >>
> >> On 2 Oct 2017 at 19:58, "Dmitriy Kaluzhniy" <
> >> dmitriy.kaluzh...@gmail.com> wrote:
> >>
> >> > Hello,
> >> > I was working with templates and found out that Windows templates
> >> > automatically get an E1000 interface. Is there any way to change it
> >> > to virtio?
> >> >
> >> > --
> >> >
> >> >
> >> >
> >> >
> >> > Best regards,
> >> > Dmitriy Kaluzhniy
> >> > +38 (073) 101 14 73
> >> >
> >>
> >
> >
> >
> > --
> >
> >
> >
> > Best regards,
> > Dmitriy Kaluzhniy
> > +38 (073) 101 14 73
> >
>
>
>
> --
>
>
>
> Best regards,
> Dmitriy Kaluzhniy
> +38 (073) 101 14 73
>



-- 

Andrija Panić


Re: Advice on multiple PODs network design

2017-10-04 Thread Andrija Panic
Anyone? I know I'm trying to squeeze some free consulting here :), but I'm
trying to understand if pods make sense in this situation.

Thx

On 2 October 2017 at 10:21, Andrija Panic  wrote:

> Hi guys,
>
> Sorry for long post below...
>
> I was wondering if someone could shed some light for me on multiple-pod
> network design (L2 vs L3) - the idea is to make smaller L2 broadcast
> domains (any other reason?)
>
> We might decide to transition from the current single-pod, single-cluster
> (single-zone) setup to a multiple-pod design (or not...) - we will
> eventually grow to over 50 racks' worth of KVM hosts (1000+ hosts), so I'm
> trying to understand the best options to avoid having insanely huge L2
> broadcast domains...
>
> The mgmt network is routed between pods; that is clear.
>
> We have dedicated Primary Storage and Secondary Storage networks (VLAN
> interfaces configured locally on all KVM hosts, providing a direct L2
> connection, obviously, and not shared with the mgmt network), and the same
> for the Public and Guest networks... (advanced networking in the zone,
> with VXLAN used for isolation)
>
> Now with multiple pods, since the Public and Guest networks are defined at
> the zone level (not the pod level), and Primary Storage is currently a
> zone-wide setup as well... what would be the best way to make this traffic
> stay inside the pods as much as possible, and is this possible at all?
> Perhaps I would need to look into multiple zones, not pods.
>
> My humble conclusion, based on having all these dedicated networks, is
> that I need to stretch (L2-attach as a VLAN interface) the primary and
> secondary storage networks across all racks/pods, and also need to stretch
> the Guest VLAN (which carries all the guest VXLAN tunnels), and again the
> same for the Public network... and this again creates huge broadcast
> domains and doesn't solve my issue... I don't see any other way to make
> networking work across pods.
>
> Any suggestion is most welcome (and, if of any use as info, we don't plan
> for any Xen, VMware, etc.; we will stay purely with KVM).
>
> Thanks
> Andrija
>



-- 

Andrija Panić


Re: Advice on multiple PODs network design

2017-10-04 Thread Rafael Weingärtner
I think this can cause problems if not properly managed, unless you
concentrate domains/users in pods. Otherwise, you might end up with some
VMs of the same user/domain/project in different pods, and if they are
all in the same VPC, for instance, we would expect them to be in the same
broadcast domain.


I think applying what you want may require some design and testing, but
it feels feasible with ACS.


On 10/4/2017 5:19 PM, Andrija Panic wrote:

Anyone? I know I'm trying to squeeze some free consulting here :), but I'm
trying to understand if pods make sense in this situation.

Thx

On 2 October 2017 at 10:21, Andrija Panic  wrote:


Hi guys,

Sorry for long post below...

I was wondering if someone could shed some light for me on multiple-pod
network design (L2 vs L3) - the idea is to make smaller L2 broadcast
domains (any other reason?)

We might decide to transition from the current single-pod, single-cluster
(single-zone) setup to a multiple-pod design (or not...) - we will
eventually grow to over 50 racks' worth of KVM hosts (1000+ hosts), so I'm
trying to understand the best options to avoid having insanely huge L2
broadcast domains...

The mgmt network is routed between pods; that is clear.

We have dedicated Primary Storage and Secondary Storage networks (VLAN
interfaces configured locally on all KVM hosts, providing a direct L2
connection, obviously, and not shared with the mgmt network), and the same
for the Public and Guest networks... (advanced networking in the zone,
with VXLAN used for isolation)

Now with multiple pods, since the Public and Guest networks are defined at
the zone level (not the pod level), and Primary Storage is currently a
zone-wide setup as well... what would be the best way to make this traffic
stay inside the pods as much as possible, and is this possible at all?
Perhaps I would need to look into multiple zones, not pods.

My humble conclusion, based on having all these dedicated networks, is
that I need to stretch (L2-attach as a VLAN interface) the primary and
secondary storage networks across all racks/pods, and also need to stretch
the Guest VLAN (which carries all the guest VXLAN tunnels), and again the
same for the Public network... and this again creates huge broadcast
domains and doesn't solve my issue... I don't see any other way to make
networking work across pods.

Any suggestion is most welcome (and, if of any use as info, we don't plan
for any Xen, VMware, etc.; we will stay purely with KVM).

Thanks
Andrija






--
Rafael Weingärtner