Switching to HEAPPOOLS fixed this problem for us. 
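
For what it's worth, a minimal way to turn it on (we let LE pick the pool
sizes; they can also be tuned from an RPTSTG(ON) storage report) is something
like:

   /* in the compile unit containing main(), or equivalently via a    */
   /* CEEOPTS DD statement or the _CEE_RUNOPTS environment variable   */
   #pragma runopts(HEAPPOOLS(ON))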

> On 8 Oct 2024, at 21:59, Attila Fogarasi 
> <000005b6fee9abb7-dmarc-requ...@listserv.ua.edu> wrote:
> 
> Glad it's fixed.  The simplest explanation is that the application is
> issuing storage requests which are continuously increasing in size over
> time.  With KEEP the free storage in the heap is too small for this new
> request, so the heap is extended for this new area.  With FREE the storage
> that has been freed within the heap will cause the heap to shrink, and the
> new request goes to the end of the heap, at a smaller address.  For a busy
> application even 1% of requests behaving like this will cause the heap to
> grow indefinitely with KEEP and work fine with FREE.  There are more
> complex scenarios that match your symptoms, but you'd need to do some IPCS
> heap analysis or use a good LE-savvy monitor such as SYSVIEW to understand
> the behaviour.
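> 
> As a rough illustration of that first scenario (the sizes are hypothetical,
> not taken from your application), a pattern like the following can grow the
> heap indefinitely under disp=KEEP, because each new, slightly larger request
> may no longer fit in the space freed by the previous one, forcing another
> heap increment that is never given back:
> 
>    #include <stdlib.h>
> 
>    int main(void) {
>        size_t len = 64 * 1024;           /* hypothetical starting size     */
>        int i;
>        for (i = 0; i < 10000; i++) {
>            char *work = malloc(len);     /* may not fit in the freed hole  */
>            if (work == NULL) break;
>            /* ... process one unit of work ... */
>            free(work);                   /* space is freed, but with KEEP  */
>                                          /* the heap increment obtained    */
>                                          /* for it is retained             */
>            len += 64;                    /* requests creep upward          */
>        }
>        return 0;
>    }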
> 
>> On Tue, Oct 8, 2024 at 4:12 AM Eric Erickson <esf...@windstream.net> wrote:
>> 
>> Attila,
>> 
>> Thanks for that pointer. When we changed KEEP to FREE (well, not all of
>> them, just this one) our heap memory issues went away. Now my real question
>> is: why didn't the heap segments get reused? Since our high-water mark was
>> much lower after the change, it appears our application was not fragmenting
>> the heap. So what could have been causing the heap to grow until we blew
>> past 1.5GB and came crashing down?
>> 
>> Again thanks!
>> 
>> On Fri, 5 Jan 2024 08:58:38 +1100, Attila Fogarasi <fogar...@gmail.com>
>> wrote:
>> 
>>> Sounds like your HEAP options are inappropriate for this application,
>>> classic fragmentation ... e.g. set to KEEP and increment size is too
>>> small.  Check your CEE_RUNOPTS, or there could be other LE config/exits
>>> involved.  Suggest you set disp=FREE and work out a suitable increment size.
>>> HEAP(initial, increment, location, disp, init24, incr24)
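>>> 
>>> For example (the sizes here are placeholders only; work out sensible
>>> initial and increment values from an RPTSTG(ON) report), something like:
>>> 
>>>    /* or the equivalent string in CEEOPTS / _CEE_RUNOPTS */
>>>    #pragma runopts(HEAP(4M,1M,ANYWHERE,FREE,4K,4K))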
>>> 
>>> On Fri, Jan 5, 2024 at 8:45 AM Eric Erickson <esf...@windstream.net> wrote:
>>> 
>>>> We are in a bit of a quandary here with some memory issues surrounding
>>>> our application. This is a multitasking LE C application running in
>>>> 31-bit mode that uses the IBM JSON and EZNOSQL services. Some of its
>>>> attributes are:
>>>> 
>>>> •       z/OS V2.5 operating system
>>>> •       POSIX(OFF) - all tasks/subtasks
>>>> •       Single address space (31-bit mode)
>>>> •       ATTACHX multitasking model (no pthreads)
>>>> •       Executes as a started task – Problem State – Key 4
>>>> •       Drops in/out of supervisor state as needed
>>>> •       3 EZNOSQL databases are opened at application start and remain
>>>> open until termination
>>>> •       Open EZNOSQL connection tokens are passed to the worker task(s)
>>>> along with the unit of work to be processed
>>>> 
>>>> Our issue is that the total heap size grows until we exhaust all
>>>> available memory and the application inevitably fails. The key here is
>>>> that while the total heap grows with every unit of work processed by the
>>>> tasks, the in-use amount shows no increment, or only a small one (<128
>>>> bytes), between units of work. For example, here is an example heap
>>>> report (produced with the LE __heaprpt function). So we are fairly
>>>> confident that our application code is not leaking memory.
>>>> 
>>>> HeapReport: ZdpQuery @Start  - Total/In Use/Available:   1048576/    888160/    160416.
>>>> HeapReport: ZdpQuery @Enter  - Total/In Use/Available:   1048576/    888160/    160416.
>>>> HeapReport: ZdpQuery @Exit   - Total/In Use/Available:   1560856/    888192/    672664.
>>>> HeapReport: ZdpQuery @Enter  - Total/In Use/Available:   1560856/    888192/    672664.
>>>> HeapReport: ZdpQuery @Exit   - Total/In Use/Available:   2073088/    888224/   1184864.
>>>> HeapReport: ZdpQuery @Enter  - Total/In Use/Available:   2073088/    888224/   1184864.
>>>> HeapReport: ZdpQuery @Exit   - Total/In Use/Available:   2073088/    888224/   1184864.
>>>> HeapReport: ZdpQuery @Enter  - Total/In Use/Available:   2073088/    888224/   1184864.
>>>> HeapReport: ZdpQuery @Exit   - Total/In Use/Available:   2585376/    888256/   1697120.
>>>> HeapReport: ZdpQuery @Enter  - Total/In Use/Available:   2585376/    888256/   1697120.
>>>> HeapReport: ZdpQuery @Exit   - Total/In Use/Available:   2585376/    888256/   1697120.
>>>> HeapReport: ZdpQuery @Enter  - Total/In Use/Available:   2585376/    888256/   1697120.
>>>> HeapReport: ZdpQuery @Exit   - Total/In Use/Available:   2585376/    888256/   1697120.
>>>> HeapReport: ZdpQuery @Finish - Total/In Use/Available:   2585376/    888256/   1697120.
>>>> 
>>>> The @Start and @Finish lines show the heap report results just after the
>>>> task is attached and just before it terminates. The @Enter/@Exit lines
>>>> show the heap at the start and end of each unit of work, respectively.
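>>>> 
>>>> For reference, each line is produced by a small wrapper roughly like the
>>>> one below. The hreport_t structure and its field names are as we read
>>>> them in stdlib.h for __heaprpt(); verify them against your own release.
>>>> 
>>>>    #include <stdio.h>
>>>>    #include <stdlib.h>                  /* __heaprpt(), hreport_t      */
>>>> 
>>>>    static void heap_report(const char *func, const char *where) {
>>>>        hreport_t hr;                    /* user (dynamic) heap report  */
>>>>        if (__heaprpt(&hr) == 0) {
>>>>            printf("HeapReport: %s %-7s - Total/In Use/Available: %9lu/%9lu/%9lu.\n",
>>>>                   func, where,
>>>>                   (unsigned long)hr.__uheap_size,         /* total     */
>>>>                   (unsigned long)hr.__uheap_bytes_alloc,  /* in use    */
>>>>                   (unsigned long)hr.__uheap_bytes_free);  /* available */
>>>>        }
>>>>    }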
>>>> 
>>>> We are at a loss to explain why the heap keeps growing. We would expect
>>>> the heap to grow to some high-water mark and then stabilize, but the
>>>> total size just keeps growing until the application fails with an
>>>> out-of-memory condition, even though a significant amount of heap storage
>>>> is still available. Our tasks are returning all the storage they directly
>>>> allocate back to the heap, as indicated by the in-use numbers at start
>>>> and end. While there is a small increment in the in-use number, we think
>>>> that may just be LE overhead in managing the heap; in any case it is
>>>> generally less than 128 bytes per iteration, and it only appears when the
>>>> total heap size increases. What makes this example even more interesting
>>>> is that we are processing the exact same request for each iteration.
>>>> 
>>>> We've turned on the various LE memory analysis options (HEAPCHK, RPTSTG)
>>>> and used the LE alternate heap manager to detect overlays, corruption,
>>>> etc. This pointed us to a couple of minor leaks, which we plugged, but it
>>>> has not led us to an answer for the growing heap. We make heavy use of
>>>> the IBM JSON and EZNOSQL services during processing.
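>>>> 
>>>> For completeness, we switch those diagnostics on with run-time options
>>>> roughly like this (default suboptions; we only run this way in test
>>>> because of the overhead):
>>>> 
>>>>    /* or the equivalent in a CEEOPTS DD / _CEE_RUNOPTS */
>>>>    #pragma runopts(RPTSTG(ON), HEAPCHK(ON))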
>>>> 
>>>> We are in search of any insight or recommendations as to how to proceed
>>>> in diagnosing this issue.
>>>> 

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
