Andrea,

On Tue, Aug 29, 2017 at 5:39 PM, Ing. Andrea Vettori
<a.vett...@b2bires.com> wrote:
>> On 29 Aug 2017, at 12:29, Suvendu Sekhar Mondal <suv3...@gmail.com> wrote:
>>
>> On Tue, Aug 29, 2017 at 2:54 PM, Ing. Andrea Vettori
>> <a.vett...@b2bires.com> wrote:
>>> - with a freshly started Tomcat instance, the time it takes is around 0.8
>>> seconds. Most of the time is spent on the two RMI calls the task does.
>>> - with an instance that has been running for some time, the time can reach
>>> 2-3 seconds, occasionally 5-6 seconds. Most of the time is still spent on
>>> the RMI calls, i.e. it is the RMI calls that slow down.
>>
>> Silly question: do you mean the RMI calls originating from Tomcat are
>> getting slower with time? Or is JBoss taking longer to return the response?
>
> Thanks for your help.
> What I see is that the HTTP requests are slower. Looking at the details of
> that specific request (putting a product in the cart) using System.nanoTime,
> I can see that most of the time is spent during the RMI calls. I'm pretty
> sure it's not JBoss that is slower, because other calls made at the same
> time from fresh Tomcat instances or from a Java client are not slow.
>

I would still suggest monitoring JBoss closely because it is an important
part of your workflow. Please check its GC stats as well.
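
If it helps, one low-friction way to watch GC activity from inside the JBoss
JVM is the standard java.lang.management MXBeans. A minimal sketch (just an
illustration; the class name and where you log the output are up to you):

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcStatsDump {
        public static void main(String[] args) {
            // Cumulative collection count and total pause time per collector;
            // with G1 these show up as "G1 Young Generation" and "G1 Old Generation".
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("%s: count=%d, time=%dms%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
        }
    }

Sampling that every minute or so and comparing it against the request
timings would show quickly whether JBoss GC pauses line up with the slow calls.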

>>
>>> - restarting the JVM fixes the issue
>>> - ***it seems*** (I'm still testing this, since there appears to be no
>>> 'metaspace GC trigger' command available) that when Metaspace is garbage
>>> collected, Tomcat then performs like a fresh instance.
>>>
>>
>> Yes, there is no command to garbage collect only the Metaspace area.
>> Without recycling Tomcat, you can trigger a Full GC using either VisualVM
>> or jcmd's GC.run command.
>
> I think that an explicit GC is not doing a Metaspace GC… that's what I
> observed when triggering the GC from the Java console.
>

In the HotSpot VM, a Full GC will collect "dead" objects/classes from all
the regions (Young, Old, Metaspace/PermGen). In the provided GC log, I can
see that you triggered Full GCs but Metaspace did not shrink much. That
clearly indicates you had a good number of classes that were still "in use":

48318.001: [Full GC (Heap Dump Initiated GC)  5232M->1298M(20G), 4.9784080 secs]
   [Eden: 2800.0M(3136.0M)->0.0B(9616.0M) Survivors: 120.0M->0.0B Heap: 5232.7M(20.0G)->1298.9M(20.0G)], [Metaspace: 1419958K->1398648K(2621440K)]
 [Times: user=9.55 sys=0.09, real=4.98 secs]
48404.270: [Full GC (System.gc())  2825M->1301M(20G), 4.3863751 secs]
   [Eden: 1528.0M(9616.0M)->0.0B(9616.0M) Survivors: 0.0B->0.0B Heap: 2825.2M(20.0G)->1301.6M(20.0G)], [Metaspace: 1425628K->1425623K(2627584K)]
 [Times: user=9.09 sys=0.15, real=4.38 secs]

Sometimes a Full GC was able to reclaim some space from Metaspace, but only
as part of a final "last ditch collection" effort (a small monitoring sketch
follows the excerpt below):

43618.504: [Full GC (Last ditch collection)  1386M->250M(20G), 1.6455823 secs]
   [Eden: 0.0B(6408.0M)->0.0B(6408.0M) Survivors: 0.0B->0.0B Heap: 1386.7M(20.0G)->250.5M(20.0G)], [Metaspace: 1646471K->163253K(1843200K)]
 [Times: user=2.23 sys=0.10, real=1.65 secs]
49034.392: [Full GC (Last ditch collection)  1491M->347M(20G), 1.9965534 secs]
   [Eden: 0.0B(5600.0M)->0.0B(5600.0M) Survivors: 0.0B->0.0B Heap: 1491.1M(20.0G)->347.9M(20.0G)], [Metaspace: 1660804K->156199K(2031616K)]
 [Times: user=2.78 sys=0.10, real=2.00 secs]
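
Since your observation is that requests get fast right after Metaspace is
collected, it may be worth sampling the loaded-class count and Metaspace pool
usage periodically and lining them up with the request timings. A minimal
sketch using the standard MXBeans (the "Metaspace" pool name is the HotSpot
one, and how you schedule/log this is up to you):

    import java.lang.management.ClassLoadingMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;

    public class MetaspaceWatch {
        public static void main(String[] args) {
            // Loaded/unloaded class counts; steady growth with little unloading
            // often points at a classloader leak (e.g. from redeploys or proxies).
            ClassLoadingMXBean cl = ManagementFactory.getClassLoadingMXBean();
            System.out.printf("classes loaded=%d total=%d unloaded=%d%n",
                    cl.getLoadedClassCount(),
                    cl.getTotalLoadedClassCount(),
                    cl.getUnloadedClassCount());

            // Current Metaspace usage, the same numbers the GC log reports in K.
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                if ("Metaspace".equals(pool.getName())) {
                    System.out.printf("Metaspace used=%dK committed=%dK%n",
                            pool.getUsage().getUsed() / 1024,
                            pool.getUsage().getCommitted() / 1024);
                }
            }
        }
    }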

>>
>>> Since we're using more than one Tomcat instance (2 in production for this
>>> website, 1 for development) I can see that the issue is isolated to Tomcat
>>> or to the JVM/host where it runs, because the other Tomcat instances
>>> behave well at the same time that one is slow. The same JBoss/Postgres
>>> backend is used by Java batches and fat clients, and it works well with
>>> consistent times.
>>>
>>> To clarify: at the moment a production Tomcat that has been running for
>>> some time finishes the task in 3 seconds, the development Tomcat or a
>>> freshly started production Tomcat instance does the same task in less than
>>> one second. Note that repeating the task always gives consistent results,
>>> i.e. the instance that has been running for some time is always slow, the
>>> freshly started instance is always fast.
>>>
>>> Tomcat is running with these VM options:
>>>
>>> -Xms20480m -Xmx20480m -Dsun.rmi.dgc.client.gcInterval=3600000 
>>> -Dsun.rmi.dgc.server.gcInterval=3600000 -Xloggc:/tmp/gc.txt 
>>> -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9012 
>>> -Dcom.sun.management.jmxremote.ssl=false 
>>> -Dcom.sun.management.jmxremote.authenticate=false -XX:+PrintGCDetails 
>>> -XX:+PrintGCTimeStamps -XX:+UseG1GC -XX:ReservedCodeCacheSize=1g 
>>> -XX:InitialCodeCacheSize=256m -XX:+UseHugeTLBFS -XX:MetaspaceSize=1g 
>>> -XX:MaxMetaspaceSize=2g
>>>
>>> Some of the options have been added recently (for example the increase in
>>> code cache size), but they seem to have had no impact on the issue.
>>>
>>
>> Is there any specific reason to trigger a Full GC every 60 minutes
>> (-Dsun.rmi.dgc.client.gcInterval=3600000)? I think that's overkill.
>
> That's a value I saw "everywhere" as the suggested value (the default is
> too short). Note that in the GC logs I don't see Full GCs every hour, so
> maybe that option is not triggering a Full GC?

That is the default value. Please see:
http://docs.oracle.com/javase/8/docs/technotes/guides/rmi/sunrmiproperties.html
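
One easy way to cross-check it: RMI DGC collections show up in the log with
the System.gc() cause (as in the excerpt above), so you can count them in
gc.txt and see whether they really happen once per hour. A rough sketch,
assuming the /tmp/gc.txt path from your -Xloggc option:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.stream.Stream;

    public class CountExplicitGcs {
        public static void main(String[] args) throws IOException {
            // Count Full GCs whose cause is System.gc(); DGC-triggered collections
            // are reported with this cause when explicit GC is not disabled.
            try (Stream<String> lines = Files.lines(Paths.get("/tmp/gc.txt"))) {
                long count = lines.filter(l -> l.contains("Full GC (System.gc())")).count();
                System.out.println("Full GC (System.gc()) entries: " + count);
            }
        }
    }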

>>> Metaspace goes up to 1.6GB before being collected. The value after a
>>> garbage collection is around 200MB. Heap usage is variable; it usually
>>> stays under 10G and is around 1G after a collection.
>>> CPU usage rarely goes over 10%. Loaded classes are between 20k and 40k.
>>> Active sessions are around 100-120 for each instance.
>>>
>>
>> That sounds like healthy behavior to me. It means GC is occurring and
>> Metaspace is getting cleaned up. Why do you think Metaspace is the problem?
>
> Because I can observe that when the Metaspace is collected the requests
> become fast. I observed that a few hours ago, looking at the Metaspace graph
> in the Java console and making requests just after the collection.
>
>> From the given data, my first impression is that the JVM is not under any
>> memory pressure. If you can post your gc.txt file to this group, I would
>> be happy to check it.
>
> Thanks, here's a link to the gceasy report:
> http://gceasy.io/my-gc-report.jsp?p=c2hhcmVkLzIwMTcvMDgvMjkvLS1nY3N0YXRzLnR4dC5nei0tMTEtNDktMzQ=
> and the txt log file: http://ge.tt/1Un0DLm2
> Please consider that I triggered a Full GC manually two times on this server.
>
>
>>
>> Another thing: is there any way you can measure the processing time in
>> JBoss for each RMI call?
>>
>
> At the moment I'm not measuring the time on the JBoss side, but I'm
> considering using a telemetry product in the future. I'm measuring the time
> on the client side (Tomcat or fat client).
>
> Thank you!
> —
> Andrea Vettori
>

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org
