Hello,

In the test environment:
ACS 4.2.1 + compute offering (1 CPU, 1 GB RAM)

XCP 1.1 - user VM instance (CentOS) sees 32 processors.
XS 6.2 SP1 - user VM instance (CentOS) sees 1 processor.

I don't have XS 6.0.2 or 6.1 in my lab.
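
For what it's worth, this is roughly how I checked what the hypervisor
itself reports for the instance. It is only a minimal sketch, assuming the
XenAPI Java bindings (com.xensource.xenapi) are on the classpath; the host
address, credentials and VM name label are placeholders, not values from
this thread:

import java.net.URL;
import java.util.Set;

import com.xensource.xenapi.Connection;
import com.xensource.xenapi.Session;
import com.xensource.xenapi.VM;

// Prints VCPUs-max and VCPUs-at-startup for one user VM, so the values the
// hypervisor keeps can be compared with what the guest OS reports.
public class CheckVcpus {
    public static void main(String[] args) throws Exception {
        Connection conn = new Connection(new URL("https://xenserver-host")); // placeholder address
        Session.loginWithPassword(conn, "root", "password", "1.9");          // placeholder credentials
        try {
            Set<VM> vms = VM.getByNameLabel(conn, "i-2-34-VM");              // placeholder VM name label
            for (VM vm : vms) {
                System.out.println("VCPUs-max:        " + vm.getVCPUsMax(conn));
                System.out.println("VCPUs-at-startup: " + vm.getVCPUsAtStartup(conn));
            }
        } finally {
            Session.logout(conn);
        }
    }
}

On XCP 1.1 the guest sees the VCPUs-max value, on XS 6.2 SP1 only the
VCPUs-at-startup value, which matches the observations above.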




2014-02-05 Daan Hoogland <daan.hoogl...@gmail.com>:

> we see host crashes because of large posts to pool masters that are
> due to this. The fix reduces the size of these posts. When we devise a
> more elegant solution I am all for it, but at the moment the fix is the
> most sensible protection against those crashes.
>
> the way this causes crashes is that information is added to the post
> for all vCPUs, even those that are not actually allocated.
>
> @Joris, please correct me if I am wrong.
>
>
>
> On Wed, Feb 5, 2014 at 8:19 AM, Harikrishna Patnala
> <harikrishna.patn...@citrix.com> wrote:
> > Hi,
> >
> > There are XenServer docs [1][2] that suggest the max vCPU limit for
> > Linux VMs is 32.
> >
> > [1] says that 32 vCPUs is the limit for Linux VMs and *a maximum of
> > 16 vCPUs are supported by XenCenter*.
> >
> > And this value was set in the 4.2 time frame and we have been using
> > it since then.
> > To be safe I agree to set the max VCPU value to 16, but may I know how
> > this VCPUs-max value affects the HTTP request? I could not find the
> > reason for the 32 VCPUs-max limit in the log:
> >
> > [20140204T13:52:17.346Z|error|xenserverhost1|144
> inet-RPC|host.call_plugin R:e58e985539ab|master_connection] Received HTTP
> error 500 ({ method = POST; uri = /remote_db_access; query = [  ];
> content_length = [ 315932 ]; transfer encoding = ; version = 1.1; cookie =
> [
> pool_secret=386bbf39-8710-4d2d-f452-9725d79c2393/aa7bcda9-8ebb-0cef-bb77-c6b496c5d859/1f928d82-7a20-9117-dd30-f96c7349b16e
> ]; task = ; subtask_of = ; content-type = ; user_agent = xapi/1.9 })
> >
> >
> >
> > -Harikrishna
> >
> >
> >
> >
> > [1] Section 3.4 in
> http://support.citrix.com/servlet/KbServlet/download/32303-102-691296/guest.pdf
> > [2] http://support.citrix.com/article/CTX134887
> >
> >
> >
> >
> > On 04-Feb-2014, at 9:06 pm, Daan Hoogland <daan.hoogl...@gmail.com>
> wrote:
> >
> >> Hello community,
> >>
> >> This is not the only serious bug Joris has filed today. The other
> >> one is quite a bit older and we only ran into it now because we had
> >> not been using internal addresses in the 172.16 or 192.168 ranges
> >> before. That one is CLOUDSTACK-6020 and may not impede the RC, but I
> >> think 6023 does. Before I cast my -1 I want to hear some more
> >> opinions on it.
> >>
> >> regards,
> >> Daan
> >>
> >> On Tue, Feb 4, 2014 at 4:11 PM, Joris van Lieshout
> >> <jvanliesh...@schubergphilis.com> wrote:
> >>> Hi All,
> >>>
> >>> Today I've submitted this bug:
> https://issues.apache.org/jira/browse/CLOUDSTACK-6023
> >>> Because this bug has a stability impact on XenServer I'm mentioning
> >>> it here as well. In my opinion it may impact the current vote on 4.3.
> >>>
> >>> Thank you.
> >>>
> >>> Kind regards,
> >>> Joris van Lieshout
> >>>
> >>>
> >>> Schuberg Philis
> >>> Boeingavenue 271
> >>> 1119 PD  Schiphol-Rijk
> >>> schubergphilis.com
> >>>
> >>> +31 207506672
> >>> +31651428188
> >>> _____________________
> >>>
> >>>
> >>
> >>
> >>
> >> --
> >> Daan
> >
>
>
>
> --
> Daan
>
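
For my own understanding of the fix: as I read the discussion, the idea is
simply to stop writing a hard-coded 32 into VCPUs-max and to derive it from
the compute offering instead, capped at what XenServer/XenCenter support.
A rough sketch of that capping logic (plain Java, hypothetical method and
constant names, not the actual CloudStack patch):

public class VcpuLimits {

    // Limits mentioned in this thread: XenCenter supports 16 vCPUs,
    // the XenServer docs list 32 as the Linux guest maximum.
    private static final long XENCENTER_MAX_VCPUS = 16L;

    // Derive VCPUs-max from the offering instead of a fixed 32, so the pool
    // master does not have to carry state for vCPUs that are never allocated.
    static long vcpusMaxFor(long offeringCpus) {
        return Math.min(Math.max(offeringCpus, 1L), XENCENTER_MAX_VCPUS);
    }

    public static void main(String[] args) {
        System.out.println(vcpusMaxFor(1));   // 1-CPU offering from the test above -> 1, not 32
        System.out.println(vcpusMaxFor(64));  // capped at 16
    }
}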



-- 
Regards,
Tomasz Zięba
Twitter: @TZieba
LinkedIn: 
pl.linkedin.com/pub/tomasz-zięba-ph-d/3b/7a8/ab6/
