Hello Joran,

"last resort gc" means that there was an allocation failure that a
normal GC could "resolve". Basically you are in a kinda OOM situation.
I am kinda curious what kind of allocation it is. Probably it is some
very big object. It can be that allocation attempt does not correctly
fall into allocating from LO space.

One thing, though, is that the last resort GC could be made much more
lightweight for a node.js application than it currently is. I doubt
7 GCs in a row are very helpful. As a workaround you can go into
Heap::CollectAllAvailableGarbage and replace everything inside with
CollectGarbage(OLD_POINTER_SPACE, gc_reason);
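For concreteness, the patched function would look something like this
(a sketch only, not verbatim source; the original body retries a full
mark-sweep up to 7 times, which is where the 7 repeated GCs in your
trace come from, and exact signatures may differ between V8 versions):

// src/heap.cc -- sketch of the suggested patch.
void Heap::CollectAllAvailableGarbage(const char* gc_reason) {
  // A single non-incremental old pointer space collection replaces
  // the original loop of up to 7 mark-sweep attempts.
  CollectGarbage(OLD_POINTER_SPACE, gc_reason);
}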

This should get rid of the 7 repetitive GCs. I think for an application
like yours it makes perfect sense to set the internal GC limits very high
and let incremental GC crunch things instead of falling back to
non-incremental marking, but there is currently no way to configure the
GC like that.
Vyacheslav Egorov


On Mon, Nov 5, 2012 at 12:50 AM, Joran Dirk Greef <jo...@ronomon.com> wrote:
> Max-old-space-size is measured in MB, not KB as you suggest.
>
> Further, max-new-space-size makes no difference to the GC trace given above,
> whether it's passed as a flag or not, big or small.
>
> On Monday, November 5, 2012 10:21:11 AM UTC+2, Yang Guo wrote:
>>
>> The short answer is: don't mess with GC settings if you don't know what
>> you are doing.
>>
>> The long answer is: new space is the part of the heap where short-lived
>> objects are allocated. The GC scans new space on every collection and
>> promotes long-lived objects into the old space. You are setting the new
>> space to ~19GB, which takes a while to scan. Furthermore, you are setting
>> the old space to only 19MB, limiting the part of the heap that long-lived
>> objects are moved into, hence the last resort GC. What you probably want
>> is to specify a large old space size but leave the new space size at its
>> default.
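>>
>> For example, assuming app.js stands in for your entry script, something
>> like
>>
>>   node --max_old_space_size=19000 app.js
>>
>> raises only the old space limit and leaves new space at its default size.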
>>
>> Yang
>>
>> On Sunday, November 4, 2012 4:19:11 PM UTC+1, Joran Dirk Greef wrote:
>>>
>>> I am running Node v0.8.14 with --nouse_idle_notification --expose_gc
>>> --max_old_space_size=19000 --max_new_space_size=19000000.
>>>
>>> I have a large object used as part of a BitCask-style store, holding a
>>> few million entries.
>>>
>>> Calling gc() manually takes about 3 seconds, which is fine as I call it
>>> every 2 minutes.
>>>
>>> The machine has 32GB of RAM and all of it is available to the process;
>>> there is nothing else running.
>>>
>>> The process sits at around 1.9GB of RAM.
>>>
>>> I have found an interesting test case where asynchronously reading a 1MB
>>> file in Node takes longer and longer depending on how many entries are in
>>> the large object discussed above:
>>>
>>> Node.fs.readFile('test', 'binary', End.timer())
>>>   347745 ms: Scavenge 1617.4 (1660.4) -> 1611.1 (1660.4) MB, 0 ms [allocation failure].
>>>   350900 ms: Mark-sweep 1611.5 (1660.4) -> 1512.2 (1633.4) MB, 3153 ms [last resort gc].
>>>   354072 ms: Mark-sweep 1512.2 (1633.4) -> 1512.0 (1592.4) MB, 3171 ms [last resort gc].
>>>   357247 ms: Mark-sweep 1512.0 (1592.4) -> 1512.0 (1568.4) MB, 3175 ms [last resort gc].
>>>   360426 ms: Mark-sweep 1512.0 (1568.4) -> 1512.0 (1567.4) MB, 3178 ms [last resort gc].
>>>   363620 ms: Mark-sweep 1512.0 (1567.4) -> 1512.0 (1567.4) MB, 3193 ms [last resort gc].
>>>   366802 ms: Mark-sweep 1512.0 (1567.4) -> 1511.6 (1567.4) MB, 3182 ms [last resort gc].
>>>   369967 ms: Mark-sweep 1511.6 (1567.4) -> 1511.6 (1567.4) MB, 3164 ms [last resort gc].
>>> 2012-11-04T14:59:30.700Z INFO 22230ms
>>>
>>> Reading the 1MB file before the large object is created is fast; the
>>> bigger the object becomes, the slower the file is to read.
>>>
>>> Why is last resort gc being called if gc is exposed and if the machine
>>> has more than enough RAM?
>>>
>>> What is interesting is that this behaviour does not happen with V8
>>> 3.6.6.25 and earlier.
>>>
>>> The reason I can't use 3.6.6.25, however, is that its heap is limited to
>>> 1.9GB and I need more head room than that.
>>>
>>> Is there any way I can disable the last resort GC?
>

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users
