Hi Amos,

cache_mem 0
cache deny all
already there. Regarding the number of NIC ports: we have four 10G Ethernet
cards, two in each bonding interface.
Well, the entire config would be way too long, but here is the static part:

via off
cpu_affinity_map process_numbers=1 cores=2
forwarded_for delete
visible_hostname squid1
pid_filename /var/run/squid1.pid
icp_port 0
htcp_port 0
icp_access deny all
htcp_access deny all
snmp_port 0
snmp_access deny all
dns_nameservers x.x.x.x
cache_mem 0
cache deny all
pipeline_prefetch on
memory_pools off
maximum_object_size 16 KB
maximum_object_size_in_memory 16 KB
ipcache_size 0
cache_store_log none
half_closed_clients off
include /etc/squid/rules
access_log /var/log/squid/squid1-access.log
cache_log /var/log/squid/squid1-cache.log
coredump_dir /var/spool/squid/squid1
refresh_pattern ^ftp:             1440  20%  10080
refresh_pattern ^gopher:          1440   0%   1440
refresh_pattern -i (/cgi-bin/|\?)    0   0%      0
refresh_pattern .                    0  20%   4320

acl port0 myport 30000
http_access allow testhost
tcp_outgoing_address x.x.x.x port0

The include is there for the basic ACLs - safe ports and so on - to minimize
the config file footprint, since that part is static and the same for every
worker. And so on, 44 more times in this config file.

Do you know of any good article on how to tune kernel locking, or have any
idea why it is happening? I cannot find any good info on it; all I've found
are bits and pieces of kernel source code. (See the perf call-graph sketch
after the quoted message below.)

Tnx.
J.

2015-07-31 0:42 GMT+02:00 Amos Jeffries <squ...@treenet.co.nz>:

> On 31/07/2015 8:05 a.m., Josip Makarevic wrote:
> > Hi,
> >
> > I have a problem with squid setup (squid version 3.5.6, built from
> > source, centos 6.6)
> > I've tried 2 options:
> > 1. SMP
> > 2. NON-SMP
> >
> > I've decided to stick with custom build non-smp version and the thing is:
> > - i don't need cache - any kind of it
>
> cache_mem 0
> cache deny all
>
> That is it. All other caches used by Squid *are* mandatory for good
> performance. And are only used anyway when the component that needs them
> is actively used.
>
> > - I have DNS cache just for that
> > - squid has to listen on 1024 ports on 23 instances.
> > each instance listens on set of ports and each port has different
> > outgoing ip address.
>
> And how many NICs do you have that spread over?
>
> > The thing is this:
> > It's all good until we hit it with more than 150mbits then...
> >
> > (output from perf top)
> >  84.57%  [kernel]  [k] osq_lock
> >   4.62%  [kernel]  [k] mutex_spin_on_owner
> >   1.41%  [kernel]  [k] memcpy
> >   0.79%  [kernel]  [k] inet_dump_ifaddr
> >   0.62%  [kernel]  [k] memset
> >
> > 21:53:39 up 7 days, 10:38, 1 user, load average: 24.01, 23.84, 23.33
> > (yes, we have 24 cores)
> > Same behavior is with SMP and NON-SMP setup (SMP setup is all in one file
> > with workers 23 option but then I have to use rock cache)
> >
> > so, my question is....what...how to optimize this.....whatever....I'm
> > stuck for days, I've tried many sysctl options but none of them works.
> > Any help, info, something else?
>
> None of those are Squid functionality. If you want help optimizing your
> config and are willing to post it to the list I am happy to do a quick
> audit and point out any problem areas for you.
>
> But tuning the internal locking code of the kernel is way off topic.
>
> Amos
>
> _______________________________________________
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
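A minimal diagnostic sketch for the kernel-locking question above, assuming a
reasonably recent perf is installed (the sampling frequency and the 30-second
window are arbitrary examples, not recommendations): perf top only shows where
CPU time is burned, not which call path is spinning on the lock. Recording with
call chains shows which syscall paths end up in osq_lock / mutex_spin_on_owner.
The inet_dump_ifaddr entry in the profile hints that interface-address dumps
may be part of the contended path, but that is only a guess until a call-graph
profile confirms it.

    # Record samples from all CPUs for 30 seconds, with call chains (-g).
    # 99 Hz is chosen to avoid sampling in lock-step with timer ticks.
    perf record -a -g -F 99 -- sleep 30

    # Summarise; call chains are shown because the data was recorded with -g.
    # Look at which callers sit above osq_lock / mutex_spin_on_owner.
    perf report --stdio | less
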
_______________________________________________
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users