The default Node 16 heap size is 2GB (verified with node -p 'v8.getHeapStatistics()'), and, originally, we were running into Node core dumps due to out-of-memory exceptions. At that time, I set "max_memory_restart": "1800M" so that the Node processes would be restarted before a core dump. However, after enabling se
core dumps stopped and our Node process runtimes (before restart) changed from
roughly 45 minutes to many days.
--
Sean
From: dspace-tech@googlegroups.com on behalf of
Javi Rojo Díaz
Date: Monday, October 23, 2023 at 3:34 AM
To: DSpace Technical Support
Subject: Re: [dspace-tech] PM2 uses a lot of memory and causes
Thank you very much for your response!
For now, what I've done is to limit the memory usage for each core/node of the
PM2 cluster to 500MB. So, if I have 8 nodes in the cluster, the maximum
memory used by PM2 should be 500MB x 8 nodes = 4GB of RAM. This seems like a
reasonable memory usage. Since I