Yeah, I can't get the management server to function for any length of
time with those memory settings; it throws out-of-memory exceptions. I
ran into that with my devcloud Xen last Friday, since I run the
management server inside of it and dom0 only has 1GB of RAM. Increasing
to 1.5GB was enough to get by.
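
For reference, roughly how I bumped it (a sketch assuming the stock
grub/Xen setup in the devcloud image; the exact file and variable may
differ on your build):

# /etc/default/grub inside the devcloud VM
GRUB_CMDLINE_XEN="dom0_mem=1536M"
# then regenerate the grub config and reboot
update-grub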

Unless anyone has some Java tricks or insight into the Spring framework
that can improve memory use (I'm assuming this is due to Spring, based
on prior discussions), it seems like we should probably set a minimum
of 2GB for the management server, with 4GB recommended, since many
people will run their MySQL on the same host.
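
Concretely, I'm thinking of something along these lines in the packaged
tomcat6.conf (just a sketch of the proposal, values and flags untested
on my end):

JAVA_OPTS="-Djava.awt.headless=true -Xmx2g -Xms2g \
  -XX:MaxPermSize=512m -XX:+HeapDumpOnOutOfMemoryError \
  -XX:HeapDumpPath=/var/log/cloudstack/management/"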


On Wed, Feb 20, 2013 at 9:46 PM, Parth Jagirdar
<parth.jagir...@citrix.com> wrote:
> JAVA_OPTS="-Djava.awt.headless=true
> -Dcom.sun.management.jmxremote.port=45219
> -Dcom.sun.management.jmxremote.authenticate=false
> -Dcom.sun.management.jmxremote.ssl=false -Xmx512m -Xms512m
> -XX:+HeapDumpOnOutOfMemoryError
> -XX:HeapDumpPath=/var/log/cloudstack/management/ -XX:PermSize=256M"
>
> Which did not help.
>
> --------------
>
> [root@localhost management]# cat /proc/meminfo
> MemTotal:        1016656 kB
> MemFree:           68400 kB
> Buffers:            9108 kB
> Cached:            20984 kB
> SwapCached:        17492 kB
> Active:           424152 kB
> Inactive:         433152 kB
> Active(anon):     409812 kB
> Inactive(anon):   417412 kB
> Active(file):      14340 kB
> Inactive(file):    15740 kB
> Unevictable:           0 kB
> Mlocked:               0 kB
> SwapTotal:       2031608 kB
> SwapFree:        1840900 kB
> Dirty:                80 kB
> Writeback:             0 kB
> AnonPages:        815460 kB
> Mapped:            11408 kB
> Shmem:                 4 kB
> Slab:              60120 kB
> SReclaimable:      10368 kB
> SUnreclaim:        49752 kB
> KernelStack:        5216 kB
> PageTables:         6800 kB
> NFS_Unstable:          0 kB
> Bounce:                0 kB
> WritebackTmp:          0 kB
> CommitLimit:     2539936 kB
> Committed_AS:    1596896 kB
> VmallocTotal:   34359738367 kB
> VmallocUsed:        7724 kB
> VmallocChunk:   34359718200 kB
> HardwareCorrupted:     0 kB
> AnonHugePages:    503808 kB
> HugePages_Total:       0
> HugePages_Free:        0
> HugePages_Rsvd:        0
> HugePages_Surp:        0
> Hugepagesize:       2048 kB
> DirectMap4k:        6144 kB
> DirectMap2M:     1038336 kB
> [root@localhost management]#
> -----------------------------
>
>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>  9809 cloud     20   0 2215m 785m 4672 S  0.7 79.1   1:59.40 java
>  1497 mysql     20   0  700m  15m 3188 S  0.3  1.5  23:04.58 mysqld
>     1 root      20   0 19348  300  296 S  0.0  0.0   0:00.73 init
>
> On 2/20/13 8:26 PM, "Sailaja Mada" <sailaja.m...@citrix.com> wrote:
>
>>Hi,
>>
>>CloudStack Java process statistics when it stops responding are given
>>below:
>>
>>top - 09:52:03 up 4 days, 21:43,  2 users,  load average: 0.06, 0.05, 0.02
>>Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
>>Cpu(s):  1.7%us,  0.7%sy,  0.0%ni, 97.3%id,  0.3%wa,  0.0%hi,  0.0%si,
>>0.0%st
>>Mem:   1014860k total,   947632k used,    67228k free,     5868k buffers
>>Swap:  2031608k total,   832320k used,  1199288k free,    26764k cached
>>
>>  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>>12559 cloud     20   0 3159m 744m 4440 S  2.3 75.1   6:38.39 java
>>
>>Thanks,
>>Sailaja.M
>>
>>-----Original Message-----
>>From: Marcus Sorensen [mailto:shadow...@gmail.com]
>>Sent: Thursday, February 21, 2013 9:35 AM
>>To: cloudstack-dev@incubator.apache.org
>>Subject: Re: [DISCUSS] Management Server Memory Requirements
>>
>>Yes, these are great data points, but so far nobody has responded on that
>>ticket with the information required to know if the slowness is related
>>to memory settings or swapping. That was just a hunch on my part from
>>being a system admin.
>>
>>How much memory do these systems have that experience issues? What does
>>/proc/meminfo say during the issues? Does adjusting the tomcat6.conf
>>memory settings make a difference (see ticket comments)? How much memory
>>do the java processes list as resident in top?
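>>
>>Something like the following, captured while it's unresponsive, would
>>answer most of that (just a sketch of the kind of commands I mean):
>>
>>    grep -E 'MemTotal|MemFree|SwapTotal|SwapFree' /proc/meminfo
>>    free -m
>>    ps -o pid,rss,vsz,args -C java
>>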
>>On Feb 20, 2013 8:53 PM, "Parth Jagirdar" <parth.jagir...@citrix.com>
>>wrote:
>>
>>> +1. The performance degradation is dramatic, and I too have observed
>>>this issue.
>>>
>>> I have logged my comments on CLOUDSTACK-1339.
>>>
>>>
>>> ...Parth
>>>
>>> On 2/20/13 7:34 PM, "Srikanteswararao Talluri"
>>> <srikanteswararao.tall...@citrix.com> wrote:
>>>
>>> >To add to what Marcus mentioned,
>>> >regarding bug CLOUDSTACK-1339: I have observed this issue within
>>> >5-10 min of starting the management server, and there have been a
>>> >lot of API requests through automated tests. The management server
>>> >not only slows down but also goes down after a while.
>>> >
>>> >~Talluri
>>> >
>>> >-----Original Message-----
>>> >From: Marcus Sorensen [mailto:shadow...@gmail.com]
>>> >Sent: Thursday, February 21, 2013 7:22
>>> >To: cloudstack-dev@incubator.apache.org
>>> >Subject: [DISCUSS] Management Server Memory Requirements
>>> >
>>> >When Javelin was merged, there was an email sent out stating that
>>> >devs should set their MAVEN_OPTS to use 2g of heap and 512M of
>>> >permanent memory. Subsequently, there have also been several emails
>>> >and issues where devs have echoed this recommendation, and
>>> >presumably it fixed their issues. I've seen the MS run out of memory
>>> >myself and applied those recommendations.
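>>> >
>>> >For anyone who missed that thread, the recommendation boils down to
>>> >roughly this (a sketch from memory, not the exact wording of the
>>> >original email):
>>> >
>>> >export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512m"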
>>> >
>>> >Is this what we want to provide in the tomcat config for a
>>> >package-based install as well? It's effectively saying that the
>>> >minimum requirements for the management server are something like 3
>>> >or 4 GB of RAM (to be safe for other running tasks), right?
>>> >
>>> >There is currently a bug filed that may or may not have to do with
>>> >this, CLOUDSTACK-1339. Users report mgmt server slowness, with it
>>> >going unresponsive for minutes at a time, but the logs seem to show
>>> >business as usual. Users report that java is taking 75% of RAM;
>>> >depending on what else is going on, they may be swapping. Settings
>>> >in the code for an install are currently at 2g/512M. I've been
>>> >running this on a 4GB server for a while now and java is at 900M,
>>> >but I haven't been pounding it with requests or anything.
>>> >
>>> >This bug might not have anything to do with the memory settings,
>>> >but I figured it would be good to nail down what our minimum
>>> >requirements are for 4.1.
>>>
>>>
>
