Re: KVM Max Guests Limit

2018-11-09 Thread Wido den Hollander



On 11/8/18 11:20 PM, Simon Weller wrote:
> I think this is legacy and a guess back in the day. It was 50 at one point
> and it was lifted higher a few releases ago.
> 

I see. I'm about to do a test with a bunch of 128GB hypervisors and
spawning a lot of 128M VMs. Trying to see where the limit might be and
also stress the VR a bit by loading a lot of DHCP entries.
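
(Side note: the 144 figure lives in the hypervisor capabilities table, so it
can already be raised per hypervisor version without a code change. A sketch,
assuming CloudMonkey and the standard listHypervisorCapabilities /
updateHypervisorCapabilities APIs, with the id taken from the list call:

    list hypervisorcapabilities hypervisor=KVM
    update hypervisorcapabilities id=<uuid-from-list-call> maxguestslimit=500
)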

Wido

> 
> 
> 
> 
> From: Ivan Kudryavtsev 
> Sent: Thursday, November 8, 2018 3:58 PM
> To: dev
> Subject: Re: KVM Max Guests Limit
> 
> Hi all, +1 for higher numbers.
> 
> Thu, 8 Nov 2018 at 16:32, Wido den Hollander :
> 
>> Hi,
>>
>> I see that for KVM we set the limit to 144 guests by default, can
>> anybody tell me why we have this limit set to 144?
>>
>> Searching a bit I found this:
>> https://access.redhat.com/articles/rhel-kvm-limits
>>
>> "This guest limit does not apply to Red Hat Enterprise Linux with
>> Unlimited Guests. There is no guest limit for Red Hat Enterprise
>> Virtualization"
>>
>> There is always a limit somewhere, but why do we set it to 144?
>>
>> I would personally vote for increasing this to 500 or something so that
>> users don't run into it that easily.
>>
>> Also, the log line is printed in DEBUG mode only when a host reaches
>> this limit, so I created a PR to set this to INFO:
>> https://github.com/apache/cloudstack/pull/3013
>>
>> Any input?
>>
>> Wido
>>
> 
> 
> --
> With best regards, Ivan Kudryavtsev
> Bitworks LLC
> Cell RU: +7-923-414-1515
> Cell USA: +1-201-257-1512
> WWW: http://bitworks.software/ 
> 


[GitHub] rafaelweingartner closed pull request #19: updated jasypt version for change db password

2018-11-09 Thread GitBox
rafaelweingartner closed pull request #19: updated jasypt version for change db 
password
URL: https://github.com/apache/cloudstack-documentation/pull/19
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

diff --git a/source/adminguide/management.rst b/source/adminguide/management.rst
index d45def1..c395542 100644
--- a/source/adminguide/management.rst
+++ b/source/adminguide/management.rst
@@ -150,7 +150,7 @@ add the encrypted password to
 
.. code:: bash
 
-   # java -classpath /usr/share/cloudstack-common/lib/jasypt-1.9.0.jar \
-   org.jasypt.intf.cli.JasyptPBEStringEncryptionCLI encrypt.sh \
-   input="newpassword123" password="`cat /etc/cloudstack/management/key`" \
-   verbose=false
+   # java -classpath /usr/share/cloudstack-common/lib/jasypt-1.9.2.jar \
+   org.jasypt.intf.cli.JasyptPBEStringEncryptionCLI encrypt.sh \
+   input="newpassword123" password="`cat /etc/cloudstack/management/key`" \
+   verbose=false
 
 
 File encryption type
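
For context, this is how the command in the hunk above is run, and where its
output goes - a usage sketch assuming the default management-server paths
(the trailing "encrypt.sh" token is just the script name that jasypt's CLI
echoes in its usage messages):

    java -classpath /usr/share/cloudstack-common/lib/jasypt-1.9.2.jar \
        org.jasypt.intf.cli.JasyptPBEStringEncryptionCLI encrypt.sh \
        input="newpassword123" password="`cat /etc/cloudstack/management/key`" \
        verbose=false

The printed ciphertext then goes into /etc/cloudstack/management/db.properties
as db.cloud.password=ENC(<ciphertext>).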


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


Re: KVM Max Guests Limit

2018-11-09 Thread Rafael Weingärtner
Do we need these logical constraints in ACS at all?

On Fri, Nov 9, 2018 at 6:57 AM Wido den Hollander  wrote:

>
>
> On 11/8/18 11:20 PM, Simon Weller wrote:
> > I think this is legacy and a guess back in the day. It was 50 at one
> point and it was lifted higher a few releases ago.
> >
>
> I see. I'm about to do a test with a bunch of 128GB hypervisors and
> spawning a lot of 128M VMs. Trying to see where the limit might be and
> also stress the VR a bit by loading a lot of DHCP entries.
>
> Wido
>
> >
> >
> >
> > 
> > From: Ivan Kudryavtsev 
> > Sent: Thursday, November 8, 2018 3:58 PM
> > To: dev
> > Subject: Re: KVM Max Guests Limit
> >
> > Hi all, +1 for higher numbers.
> >
> > Thu, 8 Nov 2018 at 16:32, Wido den Hollander :
> >
> >> Hi,
> >>
> >> I see that for KVM we set the limit to 144 guests by default, can
> >> anybody tell me why we have this limit set to 144?
> >>
> >> Searching a bit I found this:
> >> https://access.redhat.com/articles/rhel-kvm-limits
> >>
> >> "This guest limit does not apply to Red Hat Enterprise Linux with
> >> Unlimited Guests. There is no guest limit for Red Hat Enterprise
> >> Virtualization"
> >>
> >> There is always a limit somewhere, but why do we set it to 144?
> >>
> >> I would personally vote for increasing this to 500 or something so that
> >> users don't run into it that easily.
> >>
> >> Also, the log line is printed in DEBUG mode only when a host reaches
> >> this limit, so I created a PR to set this to INFO:
> >> https://github.com/apache/cloudstack/pull/3013
> >>
> >> Any input?
> >>
> >> Wido
> >>
> >
> >
> > --
> > With best regards, Ivan Kudryavtsev
> > Bitworks LLC
> > Cell RU: +7-923-414-1515
> > Cell USA: +1-201-257-1512
> > WWW: http://bitworks.software/ 
> >
>


-- 
Rafael Weingärtner


Re: KVM Max Guests Limit

2018-11-09 Thread Andrija Panic
afaik not - but I did once or twice run into a perhaps loosely connected
issue - ACS reports 100% of host RAM (makes sense) as available for VM
deployment - so in 1-2 cases I ran into the out-of-memory killer,
crashing my VMs.

It would be great to have some amount of "reserve RAM" for the host OS - or
simply have a PER HOST RAM disableThreshold setting, similar to the cluster level
"cluster.memory.allocated.capacity.disablethreshold", just on host level...

On Fri, 9 Nov 2018 at 12:03, Rafael Weingärtner 
wrote:

> Do we need these logical constraints in ACS at all?
>
> On Fri, Nov 9, 2018 at 6:57 AM Wido den Hollander  wrote:
>
> >
> >
> > On 11/8/18 11:20 PM, Simon Weller wrote:
> > > I think this is legacy and a guess back in the day. It was 50 at one
> > point and it was lifted higher a few releases ago.
> > >
> >
> > I see. I'm about to do a test with a bunch of 128GB hypervisors and
> > spawning a lot of 128M VMs. Trying to see where the limit might be and
> > also stress the VR a bit by loading a lot of DHCP entries.
> >
> > Wido
> >
> > >
> > >
> > >
> > > 
> > > From: Ivan Kudryavtsev 
> > > Sent: Thursday, November 8, 2018 3:58 PM
> > > To: dev
> > > Subject: Re: KVM Max Guests Limit
> > >
> > > Hi all, +1 for higher numbers.
> > >
> > > Thu, 8 Nov 2018 at 16:32, Wido den Hollander :
> > >
> > >> Hi,
> > >>
> > >> I see that for KVM we set the limit to 144 guests by default, can
> > >> anybody tell me why we have this limit set to 144?
> > >>
> > >> Searching a bit I found this:
> > >> https://access.redhat.com/articles/rhel-kvm-limits
> > >>
> > >> "This guest limit does not apply to Red Hat Enterprise Linux with
> > >> Unlimited Guests. There is no guest limit for Red Hat Enterprise
> > >> Virtualization"
> > >>
> > >> There is always a limit somewhere, but why do we set it to 144?
> > >>
> > >> I would personally vote for increasing this to 500 or something so
> that
> > >> users don't run into it that easily.
> > >>
> > >> Also, the log line is printed in DEBUG mode only when a host reaches
> > >> this limit, so I created a PR to set this to INFO:
> > >> https://github.com/apache/cloudstack/pull/3013
> > >>
> > >> Any input?
> > >>
> > >> Wido
> > >>
> > >
> > >
> > > --
> > > With best regards, Ivan Kudryavtsev
> > > Bitworks LLC
> > > Cell RU: +7-923-414-1515
> > > Cell USA: +1-201-257-1512
> > > WWW: http://bitworks.software/ 
> > >
> >
>
>
> --
> Rafael Weingärtner
>


-- 

Andrija Panić


Re: KVM Max Guests Limit

2018-11-09 Thread Wido den Hollander



On 11/9/18 12:56 PM, Andrija Panic wrote:
> afaik not - but I did once or twice run into a perhaps loosely connected
> issue - ACS reports 100% of host RAM (makes sense) as available for VM
> deployment - so in 1-2 cases I ran into the out-of-memory killer,
> crashing my VMs.
> 
> It would be great to have some amount of "reserve RAM" for the host OS - or
> simply have a PER HOST RAM disableThreshold setting, similar to the cluster level
> "cluster.memory.allocated.capacity.disablethreshold", just on host level...
> 


You can do that already, in agent.properties you can set reserved memory.

But indeed, I doubt that we need such a limit in ACS at all; why do we
need to limit the number of Instances on a hypervisor?

Or at least set it to a very high number by default.

Wido

> On Fri, 9 Nov 2018 at 12:03, Rafael Weingärtner 
> wrote:
> 
>> Do we need these logical constraints in ACS at all?
>>
>> On Fri, Nov 9, 2018 at 6:57 AM Wido den Hollander  wrote:
>>
>>>
>>>
>>> On 11/8/18 11:20 PM, Simon Weller wrote:
 I think this is legacy and a guess back in the day. It was 50 at one
>>> point and it was lifted higher a few releases ago.

>>>
>>> I see. I'm about to do a test with a bunch of 128GB hypervisors and
>>> spawning a lot of 128M VMs. Trying to see where the limit might be and
>>> also stress the VR a bit by loading a lot of DHCP entries.
>>>
>>> Wido
>>>



 
 From: Ivan Kudryavtsev 
 Sent: Thursday, November 8, 2018 3:58 PM
 To: dev
 Subject: Re: KVM Max Guests Limit

 Hi all, +1 for higher numbers.

 Thu, 8 Nov 2018 at 16:32, Wido den Hollander :

> Hi,
>
> I see that for KVM we set the limit to 144 guests by default, can
> anybody tell me why we have this limit set to 144?
>
> Searching a bit I found this:
> https://access.redhat.com/articles/rhel-kvm-limits
>
> "This guest limit does not apply to Red Hat Enterprise Linux with
> Unlimited Guests. There is no guest limit for Red Hat Enterprise
> Virtualization"
>
> There is always a limit somewhere, but why do we set it to 144?
>
> I would personally vote for increasing this to 500 or something so
>> that
> users don't run into it that easily.
>
> Also, the log line is printed in DEBUG mode only when a host reaches
> this limit, so I created a PR to set this to INFO:
> https://github.com/apache/cloudstack/pull/3013
>
> Any input?
>
> Wido
>


 --
 With best regards, Ivan Kudryavtsev
 Bitworks LLC
 Cell RU: +7-923-414-1515
 Cell USA: +1-201-257-1512
 WWW: http://bitworks.software/ 

>>>
>>
>>
>> --
>> Rafael Weingärtner
>>
> 
> 


Re: KVM Max Guests Limit

2018-11-09 Thread Rafael Weingärtner
For me, that looks like a restriction from paid products: “you are client
type X, so you can start only Y VMs” - something that has lived on as legacy
in our code base. We could very much remove this limit (on instance numbers);
I expect operators to know what they are doing, and to monitor closely the
platforms/systems they run. The management of other resources such as RAM,
CPU, and others I still consider necessary, though.

On Fri, Nov 9, 2018 at 10:03 AM Wido den Hollander  wrote:

>
>
> On 11/9/18 12:56 PM, Andrija Panic wrote:
> > afaik not - but I did once or twice run into a perhaps loosely connected
> > issue - ACS reports 100% of host RAM (makes sense) as available for VM
> > deployment - so in 1-2 cases I ran into the out-of-memory killer,
> > crashing my VMs.
> >
> > It would be great to have some amount of "reserve RAM" for the host OS - or
> > simply have a PER HOST RAM disableThreshold setting, similar to the
> cluster level
> > "cluster.memory.allocated.capacity.disablethreshold", just on host
> level...
> >
>
>
> You can do that already, in agent.properties you can set reserved memory.
>
> But indeed, I doubt that we need such a limit in ACS at all; why do we
> need to limit the number of Instances on a hypervisor?
>
> Or at least set it to a very high number by default.
>
> Wido
>
> > On Fri, 9 Nov 2018 at 12:03, Rafael Weingärtner <
> rafaelweingart...@gmail.com>
> > wrote:
> >
> >> Do we need these logical constraints in ACS at all?
> >>
> >> On Fri, Nov 9, 2018 at 6:57 AM Wido den Hollander 
> wrote:
> >>
> >>>
> >>>
> >>> On 11/8/18 11:20 PM, Simon Weller wrote:
>  I think this is legacy and a guess back in the day. It was 50 at one
> >>> point and it was lifted higher a few releases ago.
> 
> >>>
> >>> I see. I'm about to do a test with a bunch of 128GB hypervisors and
> >>> spawning a lot of 128M VMs. Trying to see where the limit might be and
> >>> also stress the VR a bit by loading a lot of DHCP entries.
> >>>
> >>> Wido
> >>>
> 
> 
> 
>  
>  From: Ivan Kudryavtsev 
>  Sent: Thursday, November 8, 2018 3:58 PM
>  To: dev
>  Subject: Re: KVM Max Guests Limit
> 
>  Hi all, +1 for higher numbers.
> 
>  Thu, 8 Nov 2018 at 16:32, Wido den Hollander :
> 
> > Hi,
> >
> > I see that for KVM we set the limit to 144 guests by default, can
> > anybody tell me why we have this limit set to 144?
> >
> > Searching a bit I found this:
> > https://access.redhat.com/articles/rhel-kvm-limits
> >
> > "This guest limit does not apply to Red Hat Enterprise Linux with
> > Unlimited Guests. There is no guest limit for Red Hat Enterprise
> > Virtualization"
> >
> > There is always a limit somewhere, but why do we set it to 144?
> >
> > I would personally vote for increasing this to 500 or something so
> >> that
> > users don't run into it that easily.
> >
> > Also, the log line is printed in DEBUG mode only when a host reaches
> > this limit, so I created a PR to set this to INFO:
> > https://github.com/apache/cloudstack/pull/3013
> >
> > Any input?
> >
> > Wido
> >
> 
> 
>  --
>  With best regards, Ivan Kudryavtsev
>  Bitworks LLC
>  Cell RU: +7-923-414-1515
>  Cell USA: +1-201-257-1512
>  WWW: http://bitworks.software/ 
> 
> >>>
> >>
> >>
> >> --
> >> Rafael Weingärtner
> >>
> >
> >
>


-- 
Rafael Weingärtner


Re: KVM Max Guests Limit

2018-11-09 Thread Wido den Hollander



On 11/9/18 1:08 PM, Rafael Weingärtner wrote:
> For me, that looks like a restriction from paid products: “you are client
> type X, so you can start only Y VMs” - something that has lived on as legacy
> in our code base. We could very much remove this limit (on instance numbers);
> I expect operators to know what they are doing, and to monitor closely the
> platforms/systems they run. The management of other resources such as RAM,
> CPU, and others I still consider necessary, though.
> 

That might well be the case. I think this can be removed completely,
but there's a lot of code around it. In the end it is a boolean that
tells whether the hypervisor has reached the limit or not.
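
Concretely, the value sits in the hypervisor_capabilities table, which makes
it easy to inspect while digging through that code - a read-only peek,
assuming the usual 'cloud' database and a MySQL client:

    mysql -u cloud -p cloud -e "SELECT hypervisor_type, hypervisor_version, \
        max_guests_limit FROM hypervisor_capabilities WHERE hypervisor_type = 'KVM';"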

I will look into it.

Wido

> On Fri, Nov 9, 2018 at 10:03 AM Wido den Hollander  wrote:
> 
>>
>>
>> On 11/9/18 12:56 PM, Andrija Panic wrote:
>>> afaik not - but I did once or twice run into a perhaps loosely connected
>>> issue - ACS reports 100% of host RAM (makes sense) as available for VM
>>> deployment - so in 1-2 cases I ran into the out-of-memory killer,
>>> crashing my VMs.
>>>
>>> It would be great to have some amount of "reserve RAM" for the host OS - or
>>> simply have a PER HOST RAM disableThreshold setting, similar to the
>> cluster level
>>> "cluster.memory.allocated.capacity.disablethreshold", just on host
>> level...
>>>
>>
>>
>> You can do that already, in agent.properties you can set reserved memory.
>>
>> But indeed, I doubt that we need such a limit in ACS at all; why do we
>> need to limit the number of Instances on a hypervisor?
>>
>> Or at least set it to a very high number by default.
>>
>> Wido
>>
>>> On Fri, 9 Nov 2018 at 12:03, Rafael Weingärtner <
>> rafaelweingart...@gmail.com>
>>> wrote:
>>>
 Do we need these logical constraints in ACS at all?

 On Fri, Nov 9, 2018 at 6:57 AM Wido den Hollander 
>> wrote:

>
>
> On 11/8/18 11:20 PM, Simon Weller wrote:
>> I think this is legacy and a guess back in the day. It was 50 at one
> point and it was lifted higher a few releases ago.
>>
>
> I see. I'm about to do a test with a bunch of 128GB hypervisors and
> spawning a lot of 128M VMs. Trying to see where the limit might be and
> also stress the VR a bit by loading a lot of DHCP entries.
>
> Wido
>
>>
>>
>>
>> 
>> From: Ivan Kudryavtsev 
>> Sent: Thursday, November 8, 2018 3:58 PM
>> To: dev
>> Subject: Re: KVM Max Guests Limit
>>
>> Hi all, +1 for higher numbers.
>>
>> Thu, 8 Nov 2018 at 16:32, Wido den Hollander :
>>
>>> Hi,
>>>
>>> I see that for KVM we set the limit to 144 guests by default, can
>>> anybody tell me why we have this limit set to 144?
>>>
>>> Searching a bit I found this:
>>> https://access.redhat.com/articles/rhel-kvm-limits
>>>
>>> "This guest limit does not apply to Red Hat Enterprise Linux with
>>> Unlimited Guests. There is no guest limit for Red Hat Enterprise
>>> Virtualization"
>>>
>>> There is always a limit somewhere, but why do we set it to 144?
>>>
>>> I would personally vote for increasing this to 500 or something so
 that
>>> users don't run into it that easily.
>>>
>>> Also, the log line is printed in DEBUG mode only when a host reaches
>>> this limit, so I created a PR to set this to INFO:
>>> https://github.com/apache/cloudstack/pull/3013
>>>
>>> Any input?
>>>
>>> Wido
>>>
>>
>>
>> --
>> With best regards, Ivan Kudryavtsev
>> Bitworks LLC
>> Cell RU: +7-923-414-1515
>> Cell USA: +1-201-257-1512
>> WWW: http://bitworks.software/ 
>>
>


 --
 Rafael Weingärtner

>>>
>>>
>>
> 
> 


[DISCUSS] [CALL TO ARMS] library upgrades

2018-11-09 Thread Daan Hoogland
People, I know this is not a sexy subject, but it needs attention. There is
a PR [1] out for preparatory work on upgrading antiquated logging
frameworks. It needs attention, if only a :+1: or an argument not to do it.
Several other similar jobs need doing as well. I wasn't in Montreal, so I
don't know whether that was discussed and plans were made. I think it is
becoming a very high priority.
What do all of you think?

[1] https://github.com/apache/cloudstack/pull/2992
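
(One way to gauge the scale of the job - a rough sketch, nothing more - is to
count the Java files that still import log4j 1.x directly:

    grep -rl "org.apache.log4j" --include='*.java' . | wc -l
)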

-- 
Daan


Re: KVM Max Guests Limit

2018-11-09 Thread Andrija Panic
Thanks Wido - though I don't seem to be able to find any related setting
(there is host.overcommit.mem.mb, but that is not it - unless you can define
a negative value for it)?
https://github.com/apache/cloudstack/blob/master/agent/conf/agent.properties


thx

On Fri, 9 Nov 2018 at 13:03, Wido den Hollander  wrote:

>
>
> On 11/9/18 12:56 PM, Andrija Panic wrote:
> > afaik not - but I did run once or twice intom perhaps looselym connected
> > issue - ACS reports 100% of host RAM (makes sense) asavailable for VM
> > deployment to ACS - so in 1-2 cases I did run into out of memory killer,
> > crashing my VMs.
> >
> > It would be great to have some amount of "reserve RAM" for host OS - or
> > simply have PER HOST RAM disableTreshold setting, similar to cluster
> level
> > "cluster.memory.allocated.capacity.disablethreshold", just on host
> level...
> >
>
>
> You can do that already, in agent.properties you can set reserved memory.
>
> But indeed, I doubt that we need such a limit in ACS at all; why do we
> need to limit the number of Instances on a hypervisor?
>
> Or at least set it to a very high number by default.
>
> Wido
>
> > On Fri, 9 Nov 2018 at 12:03, Rafael Weingärtner <
> rafaelweingart...@gmail.com>
> > wrote:
> >
> >> Do we need these logical constraints in ACS at all?
> >>
> >> On Fri, Nov 9, 2018 at 6:57 AM Wido den Hollander 
> wrote:
> >>
> >>>
> >>>
> >>> On 11/8/18 11:20 PM, Simon Weller wrote:
>  I think this is legacy and a guess back in the day. It was 50 at one
> >>> point and it was lifted higher a few releases ago.
> 
> >>>
> >>> I see. I'm about to do a test with a bunch of 128GB hypervisors and
> >>> spawning a lot of 128M VMs. Trying to see where the limit might be and
> >>> also stress the VR a bit by loading a lot of DHCP entries.
> >>>
> >>> Wido
> >>>
> 
> 
> 
>  
>  From: Ivan Kudryavtsev 
>  Sent: Thursday, November 8, 2018 3:58 PM
>  To: dev
>  Subject: Re: KVM Max Guests Limit
> 
>  Hi all, +1 for higher numbers.
> 
>  Thu, 8 Nov 2018 at 16:32, Wido den Hollander :
> 
> > Hi,
> >
> > I see that for KVM we set the limit to 144 guests by default, can
> > anybody tell me why we have this limit set to 144?
> >
> > Searching a bit I found this:
> > https://access.redhat.com/articles/rhel-kvm-limits
> >
> > "This guest limit does not apply to Red Hat Enterprise Linux with
> > Unlimited Guests. There is no guest limit for Red Hat Enterprise
> > Virtualization"
> >
> > There is always a limit somewhere, but why do we set it to 144?
> >
> > I would personally vote for increasing this to 500 or something so
> >> that
> > users don't run into it that easily.
> >
> > Also, the log line is printed in DEBUG mode only when a host reaches
> > this limit, so I created a PR to set this to INFO:
> > https://github.com/apache/cloudstack/pull/3013
> >
> > Any input?
> >
> > Wido
> >
> 
> 
>  --
>  With best regards, Ivan Kudryavtsev
>  Bitworks LLC
>  Cell RU: +7-923-414-1515
>  Cell USA: +1-201-257-1512
>  WWW: http://bitworks.software/ 
> 
> >>>
> >>
> >>
> >> --
> >> Rafael Weingärtner
> >>
> >
> >
>


-- 

Andrija Panić


Re: KVM Max Guests Limit

2018-11-09 Thread Wido den Hollander



On 11/9/18 1:33 PM, Andrija Panic wrote:
> Thanks Wido - though I don't seem to be able to find any related setting
> (there is host.overcommit.mem.mb, but that is not it - unless you can
> define a negative value for it)?
> https://github.com/apache/cloudstack/blob/master/agent/conf/agent.properties
> 

host.reserved.mem.mb=32768

That one should be the setting you might want to look at.

In this case 32G is reserved and not available to CS.
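
In context that looks like this (path assuming the packaged default; restart
the agent afterwards):

    # /etc/cloudstack/agent/agent.properties
    # reserve 32 GiB for the host OS; the agent then reports
    # total RAM minus this value to the management server
    host.reserved.mem.mb=32768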

Wido

> 
> thx
> 
> On Fri, 9 Nov 2018 at 13:03, Wido den Hollander  > wrote:
> 
> 
> 
> On 11/9/18 12:56 PM, Andrija Panic wrote:
> > afaik not - but I did once or twice run into a perhaps loosely
> connected
> > issue - ACS reports 100% of host RAM (makes sense) as available for VM
> > deployment - so in 1-2 cases I ran into the out-of-memory
> killer,
> > crashing my VMs.
> >
> > It would be great to have some amount of "reserve RAM" for the host OS
> - or
> > simply have a PER HOST RAM disableThreshold setting, similar to the
> cluster level
> > "cluster.memory.allocated.capacity.disablethreshold", just on host
> level...
> >
> 
> 
> You can do that already, in agent.properties you can set reserved
> memory.
> 
> But indeed, I doubt that we need such a limit in ACS at all; why do we
> need to limit the number of Instances on a hypervisor?
> 
> Or at least set it to a very high number by default.
> 
> Wido
> 
> > On Fri, 9 Nov 2018 at 12:03, Rafael Weingärtner
> mailto:rafaelweingart...@gmail.com>>
> > wrote:
> >
> >> Do we need these logical constraints in ACS at all?
> >>
> >> On Fri, Nov 9, 2018 at 6:57 AM Wido den Hollander  > wrote:
> >>
> >>>
> >>>
> >>> On 11/8/18 11:20 PM, Simon Weller wrote:
>  I think this is legacy and a guess back in the day. It was 50
> at one
> >>> point and it was lifted higher a few releases ago.
> 
> >>>
> >>> I see. I'm about to do a test with a bunch of 128GB hypervisors and
> >>> spawning a lot of 128M VMs. Trying to see where the limit might
> be and
> >>> also stress the VR a bit by loading a lot of DHCP entries.
> >>>
> >>> Wido
> >>>
> 
> 
> 
>  
>  From: Ivan Kudryavtsev  >
>  Sent: Thursday, November 8, 2018 3:58 PM
>  To: dev
>  Subject: Re: KVM Max Guests Limit
> 
>  Hi all, +1 for higher numbers.
> 
>  Thu, 8 Nov 2018 at 16:32, Wido den Hollander  >:
> 
> > Hi,
> >
> > I see that for KVM we set the limit to 144 guests by default, can
> > anybody tell me why we have this limit set to 144?
> >
> > Searching a bit I found this:
> > https://access.redhat.com/articles/rhel-kvm-limits
> >
> > "This guest limit does not apply to Red Hat Enterprise Linux with
> > Unlimited Guests. There is no guest limit for Red Hat Enterprise
> > Virtualization"
> >
> > There is always a limit somewhere, but why do we set it to 144?
> >
> > I would personally vote for increasing this to 500 or something so
> >> that
> > users don't run into it that easily.
> >
> > Also, the log line is printed in DEBUG mode only when a host
> reaches
> > this limit, so I created a PR to set this to INFO:
> > https://github.com/apache/cloudstack/pull/3013
> >
> > Any input?
> >
> > Wido
> >
> 
> 
>  --
>  With best regards, Ivan Kudryavtsev
>  Bitworks LLC
>  Cell RU: +7-923-414-1515
>  Cell USA: +1-201-257-1512
>  WWW: http://bitworks.software/ 
> 
> >>>
> >>
> >>
> >> --
> >> Rafael Weingärtner
> >>
> >
> >
> 
> 
> 
> -- 
> 
> Andrija Panić


Re: KVM Max Guests Limit

2018-11-09 Thread Andrija Panic
Thx Wido, let me add this to the agent.properties template on master, since
it's missing. I have no idea where you got it from (perhaps from code that
uses it).

thx

On Fri, 9 Nov 2018 at 13:35, Wido den Hollander  wrote:

>
>
> On 11/9/18 1:33 PM, Andrija Panic wrote:
> > Thanks Wido - though I don't seem to be able to find any related setting
> > (there is host.overcommit.mem.mb, but that is not it - unless you can
> > define a negative value for it)?
> >
> https://github.com/apache/cloudstack/blob/master/agent/conf/agent.properties
> >
>
> host.reserved.mem.mb=32768
>
> That one should be the setting you might want to look at.
>
> In this case 32G is reserved and not available to CS.
>
> Wido
>
> >
> > thx
> >
> > On Fri, 9 Nov 2018 at 13:03, Wido den Hollander  > > wrote:
> >
> >
> >
> > On 11/9/18 12:56 PM, Andrija Panic wrote:
> > > afaik not - but I did once or twice run into a perhaps loosely
> > connected
> > > issue - ACS reports 100% of host RAM (makes sense) as available for
> VM
> > > deployment - so in 1-2 cases I ran into the out-of-memory
> > killer,
> > > crashing my VMs.
> > >
> > > It would be great to have some amount of "reserve RAM" for the host OS
> > - or
> > > simply have a PER HOST RAM disableThreshold setting, similar to the
> > cluster level
> > > "cluster.memory.allocated.capacity.disablethreshold", just on host
> > level...
> > >
> >
> >
> > You can do that already, in agent.properties you can set reserved
> > memory.
> >
> > But indeed, I doubt that we need such a limit in ACS at all; why do we
> > need to limit the number of Instances on a hypervisor?
> >
> > Or at least set it to a very high number by default.
> >
> > Wido
> >
> > > On Fri, 9 Nov 2018 at 12:03, Rafael Weingärtner
> > mailto:rafaelweingart...@gmail.com>>
> > > wrote:
> > >
> > >> Do we need these logical constraints in ACS at all?
> > >>
> > >> On Fri, Nov 9, 2018 at 6:57 AM Wido den Hollander  > > wrote:
> > >>
> > >>>
> > >>>
> > >>> On 11/8/18 11:20 PM, Simon Weller wrote:
> >  I think this is legacy and a guess back in the day. It was 50
> > at one
> > >>> point and it was lifted higher a few releases ago.
> > 
> > >>>
> > >>> I see. I'm about to do a test with a bunch of 128GB hypervisors
> and
> > >>> spawning a lot of 128M VMs. Trying to see where the limit might
> > be and
> > >>> also stress the VR a bit by loading a lot of DHCP entries.
> > >>>
> > >>> Wido
> > >>>
> > 
> > 
> > 
> >  
> >  From: Ivan Kudryavtsev  > >
> >  Sent: Thursday, November 8, 2018 3:58 PM
> >  To: dev
> >  Subject: Re: KVM Max Guests Limit
> > 
> >  Hi all, +1 for higher numbers.
> > 
> >  Thu, 8 Nov 2018 at 16:32, Wido den Hollander  > >:
> > 
> > > Hi,
> > >
> > > I see that for KVM we set the limit to 144 guests by default,
> can
> > > anybody tell me why we have this limit set to 144?
> > >
> > > Searching a bit I found this:
> > > https://access.redhat.com/articles/rhel-kvm-limits
> > >
> > > "This guest limit does not apply to Red Hat Enterprise Linux
> with
> > > Unlimited Guests. There is no guest limit for Red Hat
> Enterprise
> > > Virtualization"
> > >
> > > There is always a limit somewhere, but why do we set it to 144?
> > >
> > > I would personally vote for increasing this to 500 or
> something so
> > >> that
> > > users don't run into it that easily.
> > >
> > > Also, the log line is printed in DEBUG mode only when a host
> > reaches
> > > this limit, so I created a PR to set this to INFO:
> > > https://github.com/apache/cloudstack/pull/3013
> > >
> > > Any input?
> > >
> > > Wido
> > >
> > 
> > 
> >  --
> >  With best regards, Ivan Kudryavtsev
> >  Bitworks LLC
> >  Cell RU: +7-923-414-1515
> >  Cell USA: +1-201-257-1512
> >  WWW: http://bitworks.software/ 
> > 
> > >>>
> > >>
> > >>
> > >> --
> > >> Rafael Weingärtner
> > >>
> > >
> > >
> >
> >
> >
> > --
> >
> > Andrija Panić
>


-- 

Andrija Panić


Re: KVM Max Guests Limit

2018-11-09 Thread Andrija Panic
Please LGTM if OK... https://github.com/apache/cloudstack/pull/3016

On Fri, 9 Nov 2018 at 14:20, Andrija Panic  wrote:

> Thx Wido, let me add this to the agent.properties template on master, since
> it's missing. I have no idea where you got it from (perhaps from code that
> uses it).
>
> thx
>
> On Fri, 9 Nov 2018 at 13:35, Wido den Hollander  wrote:
>
>>
>>
>> On 11/9/18 1:33 PM, Andrija Panic wrote:
>> > Thanks Wido - though I don't seem to be able to find any related setting
>> > (there is host.overcommit.mem.mb, but that is not it - unless you can
>> > define a negative value for it)?
>> >
>> https://github.com/apache/cloudstack/blob/master/agent/conf/agent.properties
>> >
>>
>> host.reserved.mem.mb=32768
>>
>> That one should be the setting you might want to look at.
>>
>> In this case 32G is reserved and not available to CS.
>>
>> Wido
>>
>> >
>> > thx
>> >
>> > On Fri, 9 Nov 2018 at 13:03, Wido den Hollander > > > wrote:
>> >
>> >
>> >
>> > On 11/9/18 12:56 PM, Andrija Panic wrote:
>> > > afaik not - but I did once or twice run into a perhaps loosely
>> > connected
>> > > issue - ACS reports 100% of host RAM (makes sense) as available
>> for VM
>> > > deployment - so in 1-2 cases I ran into the out-of-memory
>> > killer,
>> > > crashing my VMs.
>> > >
>> > > It would be great to have some amount of "reserve RAM" for the host OS
>> > - or
>> > > simply have a PER HOST RAM disableThreshold setting, similar to the
>> > cluster level
>> > > "cluster.memory.allocated.capacity.disablethreshold", just on host
>> > level...
>> > >
>> >
>> >
>> > You can do that already, in agent.properties you can set reserved
>> > memory.
>> >
>> > But indeed, I doubt that we need such a limit in ACS at all; why do
>> we
>> > need to limit the number of Instances on a hypervisor?
>> >
>> > Or at least set it to a very high number by default.
>> >
>> > Wido
>> >
>> > > On Fri, 9 Nov 2018 at 12:03, Rafael Weingärtner
>> > mailto:rafaelweingart...@gmail.com>>
>> > > wrote:
>> > >
>> > >> Do we need these logical constraints in ACS at all?
>> > >>
>> > >> On Fri, Nov 9, 2018 at 6:57 AM Wido den Hollander <
>> w...@widodh.nl
>> > > wrote:
>> > >>
>> > >>>
>> > >>>
>> > >>> On 11/8/18 11:20 PM, Simon Weller wrote:
>> >  I think this is legacy and a guess back in the day. It was 50
>> > at one
>> > >>> point and it was lifted higher a few releases ago.
>> > 
>> > >>>
>> > >>> I see. I'm about to do a test with a bunch of 128GB hypervisors
>> and
>> > >>> spawning a lot of 128M VMs. Trying to see where the limit might
>> > be and
>> > >>> also stress the VR a bit by loading a lot of DHCP entries.
>> > >>>
>> > >>> Wido
>> > >>>
>> > 
>> > 
>> > 
>> >  
>> >  From: Ivan Kudryavtsev > > >
>> >  Sent: Thursday, November 8, 2018 3:58 PM
>> >  To: dev
>> >  Subject: Re: KVM Max Guests Limit
>> > 
>> >  Hi all, +1 for higher numbers.
>> > 
>> >  Thu, 8 Nov 2018 at 16:32, Wido den Hollander <
>> w...@widodh.nl
>> > >:
>> > 
>> > > Hi,
>> > >
>> > > I see that for KVM we set the limit to 144 guests by default,
>> can
>> > > anybody tell me why we have this limit set to 144?
>> > >
>> > > Searching a bit I found this:
>> > > https://access.redhat.com/articles/rhel-kvm-limits
>> > >
>> > > "This guest limit does not apply to Red Hat Enterprise Linux
>> with
>> > > Unlimited Guests. There is no guest limit for Red Hat
>> Enterprise
>> > > Virtualization"
>> > >
>> > > There is always a limit somewhere, but why do we set it to
>> 144?
>> > >
>> > > I would personally vote for increasing this to 500 or
>> something so
>> > >> that
>> > > users don't run into it that easily.
>> > >
>> > > Also, the log line is printed in DEBUG mode only when a host
>> > reaches
>> > > this limit, so I created a PR to set this to INFO:
>> > > https://github.com/apache/cloudstack/pull/3013
>> > >
>> > > Any input?
>> > >
>> > > Wido
>> > >
>> > 
>> > 
>> >  --
>> >  With best regards, Ivan Kudryavtsev
>> >  Bitworks LLC
>> >  Cell RU: +7-923-414-1515
>> >  Cell USA: +1-201-257-1512
>> >  WWW: http://bitworks.software/ 
>> > 
>> > >>>
>> > >>
>> > >>
>> > >> --
>> > >> Rafael Weingärtner
>> > >>
>> > >
>> > >
>> >
>> >
>> >
>> > --
>> >
>> > Andrija Panić
>>
>
>
> --
>
> Andrija Panić
>


-- 

Andrija Panić


Re: RFC. Marvin tests fail

2018-11-09 Thread Ivan Kudryavtsev
Hello Boris,
During the troubleshooting, I found that the problem is connected with
my host environment, so I decided to build a Dockerfile that runs the
"simulator & client" altogether.

The Dockerfile itself is here, for example:
https://pastebin.com/raw/ZVjtHX7F

The Dockerfile is redesigned from the original simulator one so that it
builds quickly for QA E2E tests, based on these observations:
- Java code changes very rarely;
- Marvin code changes rarely;
- tests change often.

After being built, it can be used like this:
Simulator run: docker run --name sim -v my-tmp:/tmp -it --rm simulator:4.11
Tests run: docker exec -it sim bash /root/docker_run_tests.sh
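Results land under the mounted /tmp volume; assuming Marvin's usual
/tmp/MarvinLogs layout, they can be checked in the same style:
Check run: docker exec -it sim grep -rE 'EXCEPTION|FAIL' /tmp/MarvinLogs/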


If the community is interested, I can supersede the original
tools/docker/Dockerfile with this one.

Also, I added .dockerignore to avoid Docker cache invalidation.
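
The idea there is simply to keep churn out of the build context so that cached
layers survive test edits; roughly like this (the actual file is in the PR):

    .git
    **/target
    **/*.log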

So, to sum up, this Dockerfile is for easy Marvin development and testing
in an isolated, stable environment.

Give me your feedback, please. All changes are here:
https://github.com/apache/cloudstack/pull/3012/files



Fri, 9 Nov 2018 at 2:07, Boris Stoyanov :

> You could run it with a debugger, in PyCharm for example, and figure out if
> there's anything wrong with the code. Please note that these tests are
> successful with KVM, VMware and Xen.
>
> The logs you've shared are just the results; there's a failed_plus_exceptions
> file in each test log, where you can find more detailed info and a
> stack trace.
>
> Boris.
>
>
> boris.stoya...@shapeblue.com
> www.shapeblue.com
> Amadeus House, Floral Street, London WC2E 9DP, UK
> @shapeblue
>
>
>
> > On 8 Nov 2018, at 2:37, Ivan Kudryavtsev 
> wrote:
> >
> > Looks like the reason for the failure is in python code, not tests.
> Someone
> > broke tests.
> >
> > Wed, 7 Nov 2018 at 19:29, Ivan Kudryavtsev  >:
> >
> >> Well, when I run outside of Docker, I see another behavior:
> >> Those OOB tests work well, but I meet errors in other pieces of code:
> >>
> >> *ivan@notebook:/tmp/MarvinLogs/Nov_07_2018_18_53_35_8SWDQD$ grep -P
> >> 'EXCEPTION|FAIL' results.txt *
> >> Tests ha enable/disable feature at cluster and zone level ... ===
> >> TestName: test_ha_configure_enabledisable_across_clusterzones | Status :
> >> EXCEPTION ===
> >> Tests ha resource ownership expiry across multi-mgmt server ... ===
> >> TestName: test_ha_multiple_mgmt_server_ownership | Status : EXCEPTION
> ===
> >> Tests ha FSM transitions for valid healthy host ... === TestName:
> >> test_ha_verify_fsm_available | Status : EXCEPTION ===
> >> Tests ha FSM transitions leading to degraded state ... === TestName:
> >> test_ha_verify_fsm_degraded | Status : EXCEPTION ===
> >> Tests ha FSM transitions for failures leading to fenced state ... ===
> >> TestName: test_ha_verify_fsm_fenced | Status : EXCEPTION ===
> >> Tests ha FSM transitions leading to recovering ... === TestName:
> >> test_ha_verify_fsm_recovering | Status : EXCEPTION ===
> >> === TestName: test_list_zones_metrics | Status : EXCEPTION ===
> >> Tests out-of-band management ownership expiry across multi-mgmt server
> ...
> >> === TestName: test_oobm_multiple_mgmt_server_ownership | Status :
> FAILED ===
> >> FAIL
> >> === TestName: test_ha_configure_enabledisable_across_clusterzones |
> Status
> >> : EXCEPTION ===
> >> === TestName: test_ha_multiple_mgmt_server_ownership | Status :
> EXCEPTION
> >> ===
> >> === TestName: test_ha_verify_fsm_available | Status : EXCEPTION ===
> >> === TestName: test_ha_verify_fsm_degraded | Status : EXCEPTION ===
> >> === TestName: test_ha_verify_fsm_fenced | Status : EXCEPTION ===
> >> === TestName: test_ha_verify_fsm_recovering | Status : EXCEPTION ===
> >> FAIL: Tests out-of-band management ownership expiry across multi-mgmt
> >> server
> >> === TestName: test_oobm_multiple_mgmt_server_ownership | Status : FAILED
> >> ===
> >> FAILED (SKIP=13, errors=6, failures=1)
> >>
> >> I pasted errors to Pastebin: https://pastebin.com/tXw6mRk7
> >>
> >>
> >>
> >>
> >> Wed, 7 Nov 2018 at 14:05, Boris Stoyanov <
> boris.stoya...@shapeblue.com
> >>> :
> >>
> >>> Hi Ivan,
> >>> I guess you're referring to Out-of-Band management? I think there was a
> >>> simulator provider for those tests and they are reported as passing
> with
> >>> the latest KVM test run.
> >>>
> >>> Can you share any logs/exceptions? I don't see anything wrong with your
> >>> setup at first glance btw.
> >>>
> >>> Bobby.
> >>>
> >>>
> >>> boris.stoya...@shapeblue.com
> >>> www.shapeblue.com
> >>> Amadeus House, Floral Street, London WC2E 9DP, UK
> >>> @shapeblue
> >>>
> >>>
> >>>
>  On 7 Nov 2018, at 20:52, Ivan Kudryavtsev 
> >>> wrote:
> 
 Hello, dev team. Now I'm trying to put up a PR and need to write a Marvin test
> for
 it. I did this in the past, so I tried to recall how to run the ecosystem.
> 
>  I started from the document:
> 
> >>>
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Marvin+-+Testing+with+Python
> 
 To run the simulator, I built a Docker image for 4.11:
>