On 15/02/2026 08:45, Brad House wrote:
> I've got a squid deployment where serving from cache can be slower than
> an uncached download. I'm seeing speeds of around 50MB/s when serving
> from cache, which is much slower than anticipated. In fact, when hitting
> fast upstream servers, serving of a non-cached asset is faster (even
> though it's still hitting squid to fetch it).
> I'm thinking there's got to be something wrong with my squid
> configuration. I'm currently running on Rocky Linux 10 with Squid 6.10-6.
When your network's I/O is faster than disk I/O, it is best not to store
at all.
Like so:
acl fast_servers dst ...
store_miss deny fast_servers
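For illustration, filled in with placeholder addresses (203.0.113.0/24
and 2001:db8::/32 are documentation ranges; substitute the networks of
your actual fast upstreams):

  acl fast_servers dst 203.0.113.0/24
  acl fast_servers dst 2001:db8::/32
  store_miss deny fast_servers

Matching responses are still proxied to the client as normal, they just
never get added to the cache.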
> The VM I'm using currently has 4 cores, 16G RAM and 100G of usable
> space.
You have configured your Squid to use 318 GB of cache (the cache_dir
size parameter is in megabytes: 325632 MB is 318 GB). That will not fit
within 100 GB.
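As a sketch, assuming roughly 80 GB of that 100 GB disk can be given to
the cache (leaving headroom for swap.state and other overhead):

  cache_dir aufs /var/spool/squid 81920 16 256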
> We have a large on-site build system that spins up runners for GitHub
> actions, and they're constantly fetching large assets from the internet
> for each build, hence our desire for a caching proxy. We'd rather not
> switch to Apache Traffic Server as that doesn't have SSL bump capability
> (we haven't yet enabled that capability in squid, however). Hopefully
> there's a simple configuration I'm missing.
In this case I think you want to prevent small objects from being stored
in the disk cache. They can benefit from the fast network speed and
should not inflate your bandwidth use much.
cache_dir ... min-size=102400
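Filled in against your cache_dir line (min-size is in bytes, so 102400
keeps objects under 100 KB out of the disk cache; the 81920 MB size is
the resized value suggested above):

  cache_dir aufs /var/spool/squid 81920 16 256 min-size=102400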
> Just for testing I was pulling a large image via http that is below my
> max object size:
> http://mirrors.edge.kernel.org/ubuntu-releases/20.04.6/ubuntu-20.04.6-live-server-amd64.iso
> Configuration below:
> acl public src 0.0.0.0/0
The above is the same as:
acl public src ipv4
> acl SSL_ports port 443
> acl Safe_ports port 80
> acl Safe_ports port 443
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports
> http_access allow localhost manager
> http_access deny manager
> http_access allow public
This is bad. You have an Open Proxy.
Even though "public" does not include all the IPv6 range, it does
include every possible IPv4 machine on the Internet.
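A minimal sketch of restricting access, assuming your build runners sit
on RFC 1918 networks (adjust to the ranges you actually use):

  acl localnet src 10.0.0.0/8
  acl localnet src 172.16.0.0/12
  acl localnet src 192.168.0.0/16
  http_access allow localnet
  http_access deny all

That "allow localnet" should replace your "http_access allow public"
line, not sit alongside it.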
> http_access deny to_localhost
> http_access deny to_linklocal
> http_access deny all
A series of deny followed by "deny all" is only useful if you are
supplying custom error pages.
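For example, with deny_info (the ERR_* template names here are
hypothetical; each would be a file you create in Squid's error-page
directory):

  deny_info ERR_BLOCKED_LOCAL to_localhost
  deny_info ERR_BLOCKED_LINKLOCAL to_linklocal
  http_access deny to_localhost
  http_access deny to_linklocal
  http_access deny all

Without such pages, the earlier deny lines add nothing over the final
"deny all".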
> http_port 8080
> maximum_object_size 2 GB
> cache_dir aufs /var/spool/squid 325632 16 256
> cache_mem 1000 MB
> maximum_object_size_in_memory 102400 KB
> coredump_dir /var/spool/squid
> refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
FYI, all these ...
> refresh_pattern deb$ 129600 100% 129600
> refresh_pattern udeb$ 129600 100% 129600
> refresh_pattern tar.gz$ 129600 100% 129600
> refresh_pattern tar.xz$ 129600 100% 129600
> refresh_pattern tar.bz2$ 129600 100% 129600
> refresh_pattern \/(Packages|Sources)(|\.bz2|\.gz|\.xz)$ 0 0% 0 refresh-ims
> refresh_pattern \/Release(|\.gpg)$ 0 0% 0 refresh-ims
> refresh_pattern \/InRelease$ 0 0% 0 refresh-ims
> refresh_pattern \/(Translation-.*)(|\.bz2|\.gz|\.xz)$ 0 0% 0 refresh-ims
> refresh_pattern changelogs.ubuntu.com\/.* 0 1% 1
... are only useful when the repository service does not obey HTTP/1.1
properly. Otherwise they are detrimental.
A good example is those package tar/deb files. In a repository, packages
contain their version details in the filename and URL. Once created they
remain unchanged forever.
Whereas the above rules force Squid to stop using any cached object and
replace it once these files reach 90 days old (refresh_pattern values
are in minutes: 129600 / 60 / 24 = 90 days).
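If the repositories you use do send correct Cache-Control and validator
headers, a minimal sketch is to drop those rules entirely and fall back
to the stock defaults, letting the origin headers drive freshness:

  refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
  refresh_pattern . 0 20% 4320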
HTH
Amos
_______________________________________________
squid-users mailing list
[email protected]
https://lists.squid-cache.org/listinfo/squid-users