Well, it doesn't seem to be actively swapping at this point, but I
think it has active memory swapped out and still in use, since wait%
occasionally goes up significantly. At any rate, this system is
severely memory limited.
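
If you want to confirm that, a quick check is to look at how much of the
management server JVM is sitting in swap. A sketch (it assumes a kernel
that exposes per-process swap counters in /proc, and the pgrep pattern is
just a guess at how the process shows up on your box):

  # find the management server JVM and report its swapped-out size
  MS_PID=$(pgrep -f 'org.apache.catalina.startup.Bootstrap' | head -1)
  grep VmSwap /proc/$MS_PID/status
  # or, if VmSwap isn't there, sum the per-mapping Swap: entries
  awk '/^Swap:/ {sum += $2} END {print sum " kB swapped"}' /proc/$MS_PID/smaps

A large number there would back up the swapped-out-but-still-referenced
theory even when si/so read zero for a given one-second sample.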

On Wed, Feb 20, 2013 at 9:52 PM, Parth Jagirdar
<parth.jagir...@citrix.com> wrote:
> Marcus,
>
> vmstat 1 output
>
>
> [root@localhost management]# vmstat 1
> procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
>  0  1 190820  72380  10904  15852   36   28    40    92    9    1  0  0 89 10  0
>  0  0 190820  72256  10932  15828    0    0     0    56   63  130  0  0 88 12  0
>  1  0 190820  72256  10932  15844    0    0     0     0   53  153  1  0 99  0  0
>  0  0 190820  72256  10948  15844    0    0     0    44   89  253  2  0 88 10  0
>  0  0 190820  72256  10964  15828    0    0     0    72   64  135  0  0 88 12  0
>  0  0 190820  72256  10964  15844    0    0     0     0   43   76  0  0 100  0  0
>  0  0 190820  72256  10980  15844    0    0     0    36   86  244  1  1 91  7  0
>  0  0 190820  72256  10996  15828    0    0     0    44   57  112  0  1 88 11  0
>  0  0 190820  72256  10996  15844    0    0     0     0   45   88  0  0 100  0  0
>  0  0 190820  72256  11012  15844    0    0     0    36  100  264  1  1 91  7  0
>  0  0 190820  72132  11044  15824    0    0     4    96  106  211  1  0 80 19  0
>  0  0 190820  72132  11092  15856    0    0     0   368   81  223  0  1 74 25  0
>  0  0 190820  72132  11108  15856    0    0     0    36   78  145  0  1 93  6  0
>  0  0 190820  72132  11124  15840    0    0     0    40   55  106  1  0 90  9  0
>  0  0 190820  72132  11124  15856    0    0     0     0   47   96  0  0 100  0  0
>  0  0 190820  72132  11140  15856    0    0     0    36   61  113  0  0 85 15  0
>  0  0 190820  72008  11156  15840    0    0     0    36   61  158  0  0 93  7  0
>  0  0 190820  72008  11156  15856    0    0     0     0   41   82  0  0 100  0  0
>  0  0 190820  72008  11172  15856    0    0     0    36   74  149  1  0 94  5  0
>  0  0 190820  72008  11188  15840    0    0     0    36   60  117  0  0 93  7  0
>  0  0 190820  72008  11188  15856    0    0     0     0   43   91  0  0 100  0  0
>  1  0 190820  72008  11252  15860    0    0     4   312  108  243  1  1 68 30  0
>  0  0 190820  72008  11268  15844    0    0     0    36   60  128  0  0 92  8  0
>  0  0 190820  72008  11268  15860    0    0     0     0   36   67  0  0 100  0  0
>  0  0 190820  71884  11284  15860    0    0     0   104   84  139  0  1 83 16  0
>  0  0 190820  71884  11300  15844    0    0     0    60   55  111  0  0 69 31  0
>  0  0 190820  71884  11300  15860    0    0     0     0   53  121  1  0 99  0  0
>  0  0 190820  71884  11316  15860    0    0     0    40   67  130  0  0 87 13  0
>  0  0 190820  71884  11332  15844    0    0     0    40   58  130  0  0 90 10  0
>  0  0 190820  71884  11332  15864    0    0     0     0   59  824  1  1 98  0  0
>  1  0 190820  71884  11348  15864    0    0     0    40  113  185  1  0 67 32  0
>  0  0 190820  71744  11412  15852    0    0     4   540  100  238  0  0 67 33  0
>  0  0 190820  71744  11412  15868    0    0     0     0   55  159  1  0 99  0  0
>  0  0 190820  71744  11428  15868    0    0     0    40   89  246  2  1 90  7  0
>  0  0 190820  71620  11444  15852    0    0     0    72   65  135  0  0 93  7  0
>  0  0 190820  71620  11444  15868    0    0     0     0   40   74  0  0 100  0  0
>  0  0 190820  71620  11460  15868    0    0     0    52   75  216  1  0 92  7  0
>  0  0 190820  71620  11476  15852    0    0     0    44   53  109  0  0 89 11  0
>  0  0 190820  71620  11476  15868    0    0     0     0   43   87  0  0 100  0  0
>  0  0 190820  71620  11496  15868    0    0     4    36   83  143  0  1 90  9  0
>  0  0 190820  71620  11512  15852    0    0     0    40   78  869  1  0 91  8  0
>  0  1 190820  71628  11524  15856    0    0     0   188   94  145  0  0 87 13  0
>  0  0 190820  71496  11576  15872    0    0     4   132   96  214  1  0 80 19  0
>  0  0 190820  71496  11592  15856    0    0     0    36   94  128  1  0 92  7  0
>  0  0 190820  71496  11592  15872    0    0     0     0  115  164  0  0 100  0  0
>  0  0 190820  71496  11608  15876    0    0     0    36  130  200  0  0 87 13  0
>  0  0 190820  71496  11624  15860    0    0     0    36  141  218  1  1 91  7  0
>  0  0 190820  71504  11624  15876    0    0     0     0  105  119  0  0 100  0  0
>  0  0 190820  71504  11640  15876    0    0     0    36  140  218  1  0 90  9  0
> procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
>  0  0 190820  71504  11656  15860    0    0     0    36  131  169  1  0 92  7  0
>  0  0 190820  71504  11656  15876    0    0     0     0  115  146  0  0 100  0  0
>  0  0 190820  71380  11672  15876    0    0     0    36  128  173  0  1 91  8  0
>  0  0 190820  71380  11736  15860    0    0     0   308  146  279  1  0 69 30  0
>  0  0 190820  71380  11736  15876    0    0     0     0   59   82  0  0 100  0  0
>  0  0 190820  71380  11760  15876    0    0     4    64   90  174  1  1 86 12  0
>
> ...Parth
>
>
>
>
> On 2/20/13 8:46 PM, "Parth Jagirdar" <parth.jagir...@citrix.com> wrote:
>
>>JAVA_OPTS="-Djava.awt.headless=true
>>-Dcom.sun.management.jmxremote.port=45219
>>-Dcom.sun.management.jmxremote.authenticate=false
>>-Dcom.sun.management.jmxremote.ssl=false -Xmx512m -Xms512m
>>-XX:+HeapDumpOnOutOfMemoryError
>>-XX:HeapDumpPath=/var/log/cloudstack/management/ -XX:PermSize=256M"
>>
>>Which did not help.
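
(A side note on the snippet above: with ~1 GB of physical memory, a 512m
heap plus 256m of permgen plus JVM overhead already commits most of the
box to java. Purely as an illustration, not a tested recommendation, a
smaller-footprint variant might look like

  JAVA_OPTS="-Djava.awt.headless=true -Xmx384m -Xms384m -XX:MaxPermSize=192m \
    -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/cloudstack/management/"

though whether the management server stays responsive at that size is
exactly what's in question in this thread.)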
>>
>>--------------
>>
>>[root@localhost management]# cat /proc/meminfo
>>MemTotal:        1016656 kB
>>MemFree:           68400 kB
>>Buffers:            9108 kB
>>Cached:            20984 kB
>>SwapCached:        17492 kB
>>Active:           424152 kB
>>Inactive:         433152 kB
>>Active(anon):     409812 kB
>>Inactive(anon):   417412 kB
>>Active(file):      14340 kB
>>Inactive(file):    15740 kB
>>Unevictable:           0 kB
>>Mlocked:               0 kB
>>SwapTotal:       2031608 kB
>>SwapFree:        1840900 kB
>>Dirty:                80 kB
>>Writeback:             0 kB
>>AnonPages:        815460 kB
>>Mapped:            11408 kB
>>Shmem:                 4 kB
>>Slab:              60120 kB
>>SReclaimable:      10368 kB
>>SUnreclaim:        49752 kB
>>KernelStack:        5216 kB
>>PageTables:         6800 kB
>>NFS_Unstable:          0 kB
>>Bounce:                0 kB
>>WritebackTmp:          0 kB
>>CommitLimit:     2539936 kB
>>Committed_AS:    1596896 kB
>>VmallocTotal:   34359738367 kB
>>VmallocUsed:        7724 kB
>>VmallocChunk:   34359718200 kB
>>HardwareCorrupted:     0 kB
>>AnonHugePages:    503808 kB
>>HugePages_Total:       0
>>HugePages_Free:        0
>>HugePages_Rsvd:        0
>>HugePages_Surp:        0
>>Hugepagesize:       2048 kB
>>DirectMap4k:        6144 kB
>>DirectMap2M:     1038336 kB
>>[root@localhost management]#
>>-----------------------------
>>
>>  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>> 9809 cloud     20   0 2215m 785m 4672 S  0.7 79.1   1:59.40 java
>> 1497 mysql     20   0  700m  15m 3188 S  0.3  1.5  23:04.58 mysqld
>>    1 root      20   0 19348  300  296 S  0.0  0.0   0:00.73 init
>>
>>On 2/20/13 8:26 PM, "Sailaja Mada" <sailaja.m...@citrix.com> wrote:
>>
>>>Hi,
>>>
>>>Cloudstack Java process statistics when it stops responding are given
>>>below:
>>>
>>>top - 09:52:03 up 4 days, 21:43,  2 users,  load average: 0.06, 0.05,
>>>0.02
>>>Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
>>>Cpu(s):  1.7%us,  0.7%sy,  0.0%ni, 97.3%id,  0.3%wa,  0.0%hi,  0.0%si,
>>>0.0%st
>>>Mem:   1014860k total,   947632k used,    67228k free,     5868k buffers
>>>Swap:  2031608k total,   832320k used,  1199288k free,    26764k cached
>>>
>>>  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>>>12559 cloud     20   0 3159m 744m 4440 S  2.3 75.1   6:38.39 java
>>>
>>>Thanks,
>>>Sailaja.M
>>>
>>>-----Original Message-----
>>>From: Marcus Sorensen [mailto:shadow...@gmail.com]
>>>Sent: Thursday, February 21, 2013 9:35 AM
>>>To: cloudstack-dev@incubator.apache.org
>>>Subject: Re: [DISCUSS] Management Server Memory Requirements
>>>
>>>Yes, these are great data points, but so far nobody has responded on that
>>>ticket with the information required to know if the slowness is related
>>>to memory settings or swapping. That was just a hunch on my part from
>>>being a system admin.
>>>
>>>How much memory do the systems that experience issues have? What does
>>>/proc/meminfo say during the issues? Does adjusting the tomcat6.conf
>>>memory settings make a difference (see ticket comments)? How much memory
>>>do the java processes list as resident in top?
>>>On Feb 20, 2013 8:53 PM, "Parth Jagirdar" <parth.jagir...@citrix.com>
>>>wrote:
>>>
>>>> +1. Performance degradation is dramatic, and I too have observed this
>>>>issue.
>>>>
>>>> I have logged my comments into 1339.
>>>>
>>>>
>>>> ...Parth
>>>>
>>>> On 2/20/13 7:34 PM, "Srikanteswararao Talluri"
>>>> <srikanteswararao.tall...@citrix.com> wrote:
>>>>
>>>> >To add to what Marcus mentioned, regarding bug CLOUDSTACK-1339: I have
>>>> >observed this issue within 5-10 minutes of starting the management
>>>> >server, with a lot of API requests coming in through automated tests.
>>>> >The management server not only slows down but also goes down after a
>>>> >while.
>>>> >
>>>> >~Talluri
>>>> >
>>>> >-----Original Message-----
>>>> >From: Marcus Sorensen [mailto:shadow...@gmail.com]
>>>> >Sent: Thursday, February 21, 2013 7:22
>>>> >To: cloudstack-dev@incubator.apache.org
>>>> >Subject: [DISCUSS] Management Server Memory Requirements
>>>> >
>>>> >When Javelin was merged, there was an email sent out stating that
>>>> >devs should set their MAVEN_OPTS to use 2g of heap, and 512M of
>>>> >permanent memory.  Subsequently, there have also been several e-mails
>>>> >and issues where devs have echoed this recommendation, and presumably
>>>> >it fixed issues. I've seen the MS run out of memory myself and
>>>> >applied those recommendations.
>>>> >
>>>> >Is this what we want to provide in the tomcat config for a
>>>> >package-based install as well? It's effectively saying that the minimum
>>>> >requirement for the management server is something like 3 or 4 GB of
>>>> >RAM (to leave headroom for other running tasks), right?
>>>> >
>>>> >There is currently a bug filed that may or may not be related to
>>>> >this, CLOUDSTACK-1339. Users report mgmt server slowness, with it going
>>>> >unresponsive for minutes at a time, while the logs seem to show
>>>> >business as usual. One user reports that java is taking 75% of RAM;
>>>> >depending on what else is going on, they may be swapping. The settings
>>>> >in the code for an install are currently 2g/512M. I've been running
>>>> >this on a 4GB server for a while now and java is at 900M, but I haven't
>>>> >been pounding it with requests or anything.
>>>> >
>>>> >This bug might not have anything to do with the memory settings, but
>>>> >I figured it would be good to nail down what our minimum requirements
>>>> >are for 4.1.
>>>>
>>>>
>>
>
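
For reference, the 2g/512M figures discussed above map to JVM flags
roughly like the following (a sketch; the variable names and file paths
are assumptions and may differ by version and distro):

  # developer builds, per the Javelin merge note
  export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512m"

  # packaged install, e.g. the JAVA_OPTS line in tomcat6.conf
  JAVA_OPTS="... -Xmx2g -XX:MaxPermSize=512m ..."

So the choice is effectively between lowering those defaults for packaged
installs and documenting something in the 3-4 GB range as the minimum RAM
for a management server host.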
