Hi everyone,
Maybe we should remove the -XX:-UseGCOverheadLimit option from the 
maven-surefire-plugin args and increase -Xmx to 1536m for the forks?
We have about 4 GB of RAM and 2 cores on the test VMs, so I think we can make 
the tests faster than they are now. When I ran the flink-runtime tests, some of 
them were very slow due to GC overhead.
Maybe you have also run into the problem of Travis builds failing with a timeout?
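For concreteness, here is a rough sketch of what this first change could look 
like in the surefire configuration of the root pom.xml (I have not checked the 
exact current argLine, so the surrounding values are only illustrative):

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <configuration>
        <!-- proposed: drop -XX:-UseGCOverheadLimit and raise the per-fork heap -->
        <argLine>-Xmx1536m</argLine>
      </configuration>
    </plugin>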
We could also choose the GC algorithm explicitly for the forked test JVMs.
BTW, we run the tests with Java 7 and 8, and these versions use different GC 
algorithms by default (G1 for 8 and Parallel GC for 7). IMHO, when we have 
strict limits on RAM and build time, we should avoid any ambiguity.
When some tests generate very big datasets very quickly, the parallel GC may 
not have enough time to clean up. I do not know exactly how G1 behaves in this 
case, but maybe it would be better to use good old -XX:+UseSerialGC. We have 
only about 1 core per fork anyway, so we cannot use the advantages of G1 or the 
parallel GC. If we use the serial GC (a stop-the-world collector), we can be 
sure that the GC collects almost all garbage before the test continues.
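If we go this way, the per-fork args would simply become something like the 
following (again only a sketch of my proposal, not something I have benchmarked):

    <argLine>-Xmx1536m -XX:+UseSerialGC</argLine>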
What do you think about this idea?
Maybe someone has other ideas on how to improve test time and stability?


Dmytro Shkvyra
Senior Software Engineer

Office: +380 44 390 5457 x 65346   Cell: +380 50 357 6828
Email: dmytro_shkv...@epam.com
Kyiv, Ukraine (GMT+3)   epam.com

