On 3/4/19 9:45 PM, George Xie wrote:
> #4  0x00005555558a3d0a in comm_init () at comm.cc:1206
> 1206        fd_table = (fde *) xcalloc(Squid_MaxFD, sizeof(fde));
> (gdb) p Squid_MaxFD
> $1 = 1048576
> (gdb) p sizeof(fde)
> $2 = 392
>
> It seems Squid_MaxFD is way too large, and its value comes directly from ulimit:
>
> # ulimit -n
> 1048576
>
> Therefore, I tried adding this option:
>
> max_filedesc 4096
>
> Now Squid works and takes only ~50 MB of memory.
> Thanks very much for your help!
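The numbers line up exactly: 1,048,576 descriptors times 392 bytes per fde is
the 392 MB allocation in the backtrace, and capping the table at 4096
descriptors shrinks it to about 1.5 MB. A minimal sketch of that arithmetic
(not Squid source; the fde size is the one reported by your gdb session and
varies by build):

// Back-of-the-envelope check of the fd_table size, using
// sizeof(fde) = 392 as reported by gdb above (varies by build).
#include <cstdio>

int main() {
    const long fdeSize     = 392;      // bytes per fde entry (from gdb)
    const long ulimitN     = 1048576;  // descriptor limit inside the container
    const long maxFiledesc = 4096;     // with "max_filedesc 4096" in squid.conf

    std::printf("fd_table with ulimit -n %ld: %ld MB\n",
                ulimitN, ulimitN * fdeSize / (1024 * 1024));
    std::printf("fd_table with max_filedesc %ld: %ld KB\n",
                maxFiledesc, maxFiledesc * fdeSize / 1024);
    return 0;
}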
Glad you figured it out!

Alex.

> Xie Shi
> On Tue, Mar 5, 2019 at 12:22 PM George Xie <george...@gmail.com> wrote:
>>
>>> To correct that default behavior, add this:
>>>
>>> cache_mem 0
>>
>> Thanks for your advice, but actually I have tried this option before and
>> found no difference. Besides, I have also tried `memory_pools off`.
>>
>>> Furthermore, older Squids, possibly including your no-longer-supported
>>> version, may allocate shared memory indexes where none are needed. That
>>> might explain why you see your Squid allocating a 392 MB table.
>>
>> That is fair; I will give Squid 4.4 a try later.
>>
>>> If you want to know what is going on for sure, then configure malloc to
>>> dump core on allocation failures and post a stack trace leading to that
>>> allocation failure so that we know _what_ Squid was trying to allocate
>>> when it ran out of RAM.
>>
>> I hope the following backtrace is helpful:
>>
>> (gdb) bt
>> #0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
>> #1  0x00007ffff562e42a in __GI_abort () at abort.c:89
>> #2  0x0000555555728eb5 in fatal_dump (
>>     message=0x555555e764e0 <xcalloc::msg> "xcalloc: Unable to allocate
>>     1048576 blocks of 392 bytes!\n") at fatal.cc:113
>> #3  0x0000555555a09837 in xcalloc (n=1048576, sz=sz@entry=392) at xalloc.cc:90
>> #4  0x00005555558a3d0a in comm_init () at comm.cc:1206
>> #5  0x0000555555789104 in SquidMain (argc=<optimized out>, argv=0x7fffffffed48)
>>     at main.cc:1481
>> #6  0x000055555568a48b in SquidMainSafe (argv=<optimized out>, argc=<optimized out>)
>>     at main.cc:1261
>> #7  main (argc=<optimized out>, argv=<optimized out>) at main.cc:1254
>>
>>
>> Xie Shi
>>
>>
>> On Tue, Mar 5, 2019 at 12:34 AM Alex Rousskov
>> <rouss...@measurement-factory.com> wrote:
>>>
>>> On 3/3/19 9:39 PM, George Xie wrote:
>>>
>>>> Squid version: 3.5.23-5+deb9u1
>>>
>>>> http_port 127.0.0.1:3128
>>>> cache deny all
>>>> access_log none
>>>
>>> Unfortunately, this configuration wastes RAM: Squid is not yet smart
>>> enough to understand that you do not want any caching and may allocate
>>> 256+ MB of memory cache plus supporting indexes. To correct that default
>>> behavior, add this:
>>>
>>> cache_mem 0
>>>
>>> Furthermore, older Squids, possibly including your no-longer-supported
>>> version, may allocate shared memory indexes where none are needed. That
>>> might explain why you see your Squid allocating a 392 MB table.
>>>
>>> If you want to know what is going on for sure, then configure malloc to
>>> dump core on allocation failures and post a stack trace leading to that
>>> allocation failure so that we know _what_ Squid was trying to allocate
>>> when it ran out of RAM.
>>>
>>>
>>> HTH,
>>>
>>> Alex.
>>>
>>>
>>>> It runs in a container with the following Dockerfile:
>>>>
>>>> FROM debian:9
>>>> RUN apt update && \
>>>>     apt install --yes squid
>>>>
>>>> The total memory of the host server is very low: only 592 MB, with
>>>> about 370 MB free.
>>>> If I start Squid in the container, it aborts immediately.
>>>>
>>>> Error messages in /var/log/squid/cache.log:
>>>>
>>>> FATAL: xcalloc: Unable to allocate 1048576 blocks of 392 bytes!
>>>> Squid Cache (Version 3.5.23): Terminated abnormally.
>>>> CPU Usage: 0.012 seconds = 0.004 user + 0.008 sys
>>>> Maximum Resident Size: 47168 KB
>>>>
>>>> Error message captured with strace -f -e trace=memory:
>>>>
>>>> [pid 920] mmap(NULL, 411176960, PROT_READ|PROT_WRITE,
>>>>     MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
>>>>
>>>> It appears that Squid (or glibc) tries to allocate 392 MB of memory,
>>>> which is larger than the host's 370 MB of free memory.
>>>> But I guess Squid does not need that much memory: I have another running
>>>> Squid instance, which uses less than 200 MB of memory.
>>>> The oddest thing is that if I run Squid on the host (also Debian 9)
>>>> directly, not in the container, Squid starts and runs as normal.
>>>>
>>>> Am I doing something wrong here?
>>>>
>>>> Xie Shi

_______________________________________________
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users