Btw, I think I forgot to mention that I am running ATS in a Docker
container. I suspect it should not matter, but I wanted to provide all the
details.

Are there any options that I can enable or commands that I can run to
further diagnose this?
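For example, would something along these lines be a sensible starting point?
(The container name 'ats' is just from my setup, and I'm assuming procps is
available in the image.)

  # container-level memory as Docker sees it
  docker stats ats --no-stream

  # trigger the allocator usage dump Leif mentioned (it should end up in traffic.out)
  docker exec ats pkill -USR1 -x traffic_server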

Leif,
When you mentioned disabling the freelist, did you mean the '-f' option for
traffic_server?
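If so, I assume the way to pass that in is via proxy_binary_opts in
records.config, something like this (guessing at the exact syntax here):

  CONFIG proxy.config.proxy_binary_opts STRING -M -f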

Thanks.

On Sun, Jun 2, 2019 at 5:24 PM Dk Jack <dnj0...@gmail.com> wrote:

> Thanks Leif for responding,
>
> some questions...
>
> How do I turn off the freelist? Could you please elaborate on your RAM
> disk reference? In my setup, the HTTP cache is turned off. Would the RAM
> disk still come into play?
>
> Is your 'kill -USR1' suggestion different from enabling the
> 'proxy.config.dump_mem_info_frequency' config? The graphs I posted were
> made by setting the memory dump config to a 15s frequency and turning the
> dumped stats into time series data.
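> For reference, the records.config line I have enabled for those dumps is
> (the 15s interval is just what I chose):
>
>   CONFIG proxy.config.dump_mem_info_frequency INT 15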
>
> Again, thanks for taking the time to respond...
>
> On Sun, Jun 2, 2019 at 12:52 PM Leif Hedstrom <zw...@apache.org> wrote:
>
>> Did you try turning off the freelist? If you do, you likely want to use
>> jemalloc or tcmalloc instead.
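>> For example, one way to try jemalloc without rebuilding is to preload it
>> when starting ATS (the library path here is distro-specific, so treat it
>> as a sketch):
>>
>>   LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2 trafficserver start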
>>
>> If that stops the “leak”, then it’s likely related to how the RAM disk
>> is being used.
>>
>> The other thing to do is to send kill -USR1 and look at the allocator
>> usage. Do that a few times, with a few hours in between, and compare.
>> There’s a script in the tools directory that lets you “diff” two such
>> memory usage dumps.
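>> Roughly, that workflow is (the dump output shows up in traffic.out):
>>
>>   kill -USR1 $(pidof traffic_server)    # first snapshot
>>   # ... wait a few hours ...
>>   kill -USR1 $(pidof traffic_server)    # second snapshot
>>
>> and then feed the two allocator tables to the diff script under tools/.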
>>
>> Cheers,
>>
>> — Leif
>>
>> > On Jun 2, 2019, at 12:41, Dk Jack <dnj0...@gmail.com> wrote:
>> >
>> > Does anyone have an idea of how much memory is allocated for
>> > processing each request? In this particular environment (from the
>> > graphs), it looks like none of the memory allocated is being freed. The
>> > strange thing is, with no change to software or configuration (besides
>> > a restart of ATS), memory consumption has gone up over the past couple
>> > of weeks with no appreciable increase in traffic volume. It used to
>> > leak about 2-3G a day and now it has gone up to 12-14G a day.
>> >
>> > Any ideas on where to start looking and what to look for? I've scoured
>> > my plugin code a few times and it looks clean. The memory dumps show
>> > the growth is happening in the ATS code, using ATS allocators. Any help
>> > is appreciated. Thanks.
>> >
>> >> On Fri, May 31, 2019 at 3:42 PM Dk Jack <dnj0...@gmail.com> wrote:
>> >>
>> >> stats collected via 'traffic_ctl metric ...' commands...
>> >>
>> >> https://www.dropbox.com/s/fmamnvrk5v1dq82/ats_6.2.1.txt?dl=0
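>> >>
>> >> A loop along these lines would capture the same data (the filename and
>> >> the metric filter are just examples):
>> >>
>> >>   while true; do
>> >>     date +%s >> ats_metrics.txt
>> >>     traffic_ctl metric match proxy.process >> ats_metrics.txt
>> >>     sleep 15
>> >>   done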
>> >>
>> >>> On Fri, May 31, 2019 at 3:41 PM Dk Jack <dnj0...@gmail.com> wrote:
>> >>>
>> >>> No. I only have healthcheck plugin, stats plugin and my plugin.
>> >>>
>> >>> On Fri, May 31, 2019 at 2:45 PM Steve Malenfant
>> >>> <smalenf...@gmail.com> wrote:
>> >>>
>> >>>> Do you have the stale-while-revalidate plugin? If so, disable it.
>> >>>>
>> >>>>> On Fri, May 31, 2019 at 5:42 PM Dk Jack <dnj0...@gmail.com> wrote:
>> >>>>>
>> >>>>> Hi,
>> >>>>> I am running ATS 6.2.1 and I am seeing memory leaks. The link below
>> >>>>> shows memory dump graphs for a half-hour period (dump frequency is
>> >>>>> 15s). I have a custom plugin that uses atscppapi. These graphs are
>> >>>>> from our production setup, where the traffic volume is very high
>> >>>>> (120M+ requests/day). We are seeing memory growth of 6-8M every
>> >>>>> minute.
>> >>>>>
>> >>>>> https://www.dropbox.com/s/m03qdzm5iwl7y0w/ats_stats_6.2.1.pdf?dl=0
>> >>>>>
>> >>>>> In my test setup, I don't see the issue, although the volume is a
>> >>>>> lot less there. I've put debug logs in all places where the ATS
>> >>>>> allocators are showing growth, and I see them being properly
>> >>>>> released in my test setup. Is it possible ATS is running into some
>> >>>>> error conditions causing this leak? Any pointers on where or how to
>> >>>>> go about debugging this issue are greatly appreciated...
>> >>>>>
>> >>>>> Dk.
>> >>>>>
>> >>>>> PS: I've tried to upgrade to 7.1.6, but I am running into some
>> >>>>> other crashes with my custom plugin enabled.
>> >>>>>
>> >>>>
>> >>>
>>
>>
