Hello. I'm a teacher. My computer's OS is Windows 7. I have installed Squid 3.5.27. We
have fifteen computers in our classroom. The internet bandwidth is 4 Mb/s. We need
to cache the YouTube and Facebook data of our kids. The YouTube cache needs to be
kept for a month, and the Facebook cache for a day. Our confi
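A minimal squid.conf sketch of the kind of rules being asked about (the patterns and times are assumptions, and note that HTTPS traffic from these sites cannot be cached at all without SSL interception):

# Hypothetical refresh rules; refresh_pattern times are in minutes,
# so 43200 = 30 days and 1440 = 1 day.
refresh_pattern -i \.youtube\.com 43200 90% 43200
refresh_pattern -i \.facebook\.com 1440 90% 1440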
I have to say that squid 3.3.11 worked flawlessly since January 2014...
... but I think it is time to upgrade.
My server is dated but it has 16 cores and 32 GB of RAM, with less
than 3000 users. Workload is split between 2 identical servers thanks
to a proxy.pac.
I have spinning disks now but I
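As an aside, a proxy.pac that splits load across two identical servers can be as small as this sketch (the server names are assumptions):

// Hypothetical proxy.pac: deterministically split clients between two
// Squid servers by hashing the request host, with mutual failover.
function FindProxyForURL(url, host) {
    var hash = 0;
    for (var i = 0; i < host.length; i++) {
        hash = (hash * 31 + host.charCodeAt(i)) % 2;
    }
    if (hash === 0)
        return "PROXY squid1.example.lan:3128; PROXY squid2.example.lan:3128";
    return "PROXY squid2.example.lan:3128; PROXY squid1.example.lan:3128";
}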
On 26/09/17 17:59, Eliezer Croitoru wrote:
Hey,
How about using a local BIND/Unbound DNS server that has a forwarding zone
defined only for the local domains?
For me it's a bit hard to understand the root cause of the issue, but this is
the best solution I can think of.
If you need some hel
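A minimal unbound.conf sketch of that idea, assuming the internal zone is domain.lan (taken from later in this thread) and placeholder AD DNS server addresses:

# Forward only the internal zone to the local AD DNS servers;
# everything else is resolved normally.
forward-zone:
    name: "domain.lan"
    forward-addr: 192.168.1.10
    forward-addr: 192.168.1.11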
Hi.
Thanks.
But is there some time-to-live that can be configured in Squid, so the service is
not asking for authentication every time?
Thanks!
On 27/09/17 01:57, erdosain9 wrote:
Hi.
Thanks.
But is there some time-to-live that can be configured in Squid, so the service is
not asking for authentication every time?
For Negotiate and NTLM the credentials are supposed to be unique per
connection, so each TCP connection requires a separate lookup.
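For contrast, Basic authentication does have exactly such a knob; a one-line sketch (the duration is just an example):

# Basic auth results are cached for this long; Negotiate/NTLM have no
# equivalent TTL because the handshake is bound to each TCP connection.
auth_param basic credentialsttl 2 hours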
On 26/09/17 23:42, Travel Factory S.r.l. wrote:
I have to say that squid 3.3.11 worked flawlessly since January 2014...
... but I think it is time to upgrade.
...
Can you please suggest some docs to read about the migration path, the
changes needed, the new features of 3.5?
Migration path
On 26/09/17 21:15, Sukhbaatar T wrote:
Hello. I'm a teacher. My computer's OS is Windows 7. I have installed Squid
3.5.27. We have fifteen computers in our classroom. The internet bandwidth
is 4 Mb/s. We need to cache the YouTube and Facebook data of our kids. The
YouTube cache needs to be kept for a month, and the
But why is it so slow then???
"
For Negotiate and NTLM the credentials are supposed to be unique per
connection, so each TCP connection requires a separate lookup. But
followup pipelined requests on a connection should not need auth helper
lookups as they share the already authenticated credentials.
*gr
On 27/09/17 02:59, erdosain9 wrote:
But why is it so slow then???
What is so slow *exactly*?
The report you posted only tells us about the initial lookups, not the
cached or pipelined results.
Amos
Sorry, this is part of my config
###Kerberos Auth with ActiveDirectory###
# Negotiate (Kerberos) helper, keyed to the proxy's HTTP service principal.
auth_param negotiate program /lib64/squid/negotiate_kerberos_auth -s HTTP/squid.domain@domain.lan
# Up to 45 helpers, but none pre-started (startup=0) and only one kept
# idle, so helpers are forked on demand under load.
auth_param negotiate children 45 startup=0 idle=1
auth_param negotiate keep_alive on
external_acl_type i-full %LOGIN /us
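One hedged observation on the fragment above: with startup=0 and idle=1, Squid must fork negotiate helpers on demand, which can make the first lookups under load noticeably slow. A possible adjustment (the numbers are assumptions, not something from this thread):

# Pre-start some helpers so bursts of new connections don't wait on forks.
auth_param negotiate children 45 startup=10 idle=5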
When I use my squid proxy server within my Java program I get the following
error
"Reading gzip encodec content failed.
java.util.zip.ZipException: Not in GZIP format"
This does not happen when I use other proxies or if I don't use a proxy at
all. Why is this happening? How can I fix the issue? I
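One plausible cause, with a hedged Java sketch (the proxy host and port are assumptions): the client may be wrapping every response in GZIPInputStream even when the proxy returns an uncompressed body or its own error page, so check Content-Encoding first.

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;
import java.util.zip.GZIPInputStream;

public class GzipAwareFetch {
    public static InputStream open(String address) throws Exception {
        // Assumed proxy location; replace with your Squid host and port.
        Proxy proxy = new Proxy(Proxy.Type.HTTP,
                new InetSocketAddress("squid.example.lan", 3128));
        HttpURLConnection conn =
                (HttpURLConnection) new URL(address).openConnection(proxy);
        conn.setRequestProperty("Accept-Encoding", "gzip");
        InputStream body = conn.getInputStream();
        // Only decompress if the server (or proxy) actually sent gzip.
        String encoding = conn.getContentEncoding();
        return "gzip".equalsIgnoreCase(encoding)
                ? new GZIPInputStream(body)
                : body;
    }
}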
On Tuesday 26 September 2017 at 19:43:28, xpro6000 wrote:
> When I use my squid proxy server within my Java program I get the following
> error
>
> "Reading gzip encodec content failed.
> java.util.zip.ZipException: Not in GZIP format"
>
> This does not happen when I use other proxies or if I do
As Mr. Alex Rousskov suggested, the problem was the regex itself. He provided
me with a simpler, modified regex, and now the filter is working.
My regex: ((?=.*\biphone\b)|(?=.*\bprod\b)).*\.facebook\.com(\:|\d|)
Alex's regex: \b(iphone|prod)\b.*\.facebook\.com
(Squid compiles ACL regexes as POSIX extended expressions, which have no
lookahead groups like (?=...), so the original pattern could not behave as
intended.)
Using https://regex101.com/ bot
Just a followup. Thanks to Amos, who suggested setting
sslflags=NO_DEFAULT_CA on the http_port(s). That seems to have fixed
the memory (leak??) problem. I should probably run this for a few days
to be sure, but at least now I can run squid for a few hours and the
memory is much more stable vs. be
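For anyone searching the archives later, the option goes on the port line itself; a sketch with the port and certificate path assumed:

# NO_DEFAULT_CA stops Squid loading the whole system CA bundle into
# every SSL context, which is what was ballooning memory here.
http_port 3128 ssl-bump cert=/etc/squid/ssl/proxyCA.pem sslflags=NO_DEFAULT_CA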
Hey All,
I have been working on a couple of tools which use my drbl-peer library:
- an external ACL helper
- a DNS blacklist server
- and a couple of others...
I took a DNS proxy server named grimd and upgraded it, since the developer
didn't respond fast enough.
This DNS proxy has a nice feature that allo
On 23/09/17 03:48, Heiler Bemerguy wrote:
Amos, talking about delay pools, I have a question: do they work if the
content being served is on a cache peer?
They should, yes. Peers are no different from any other server in terms of
I/O bytes.
The only thing I'm aware of in current Squid is may
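For readers following along, a minimal class-1 delay pool; it throttles peer-sourced replies the same as origin-server replies (the 256 KB/s limit is just an example):

delay_pools 1
delay_class 1 1
# aggregate restore/max in bytes per second
delay_parameters 1 262144/262144
delay_access 1 allow all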
On 23/09/17 04:30, Alex Gutiérrez Martínez wrote:
Pool #3 requires the domain name of a single transaction to
simultaneously be *mail.yahoo.com AND *.linkedin.com AND *.youtube.com.
Obviously that is impossible, so nothing can match the line that allows it.
Pool #1 should match a few things. But
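The general rule at work here: multiple values on one acl line are OR'ed, while multiple ACL names on one *_access line are AND'ed. A sketch of the working form (the ACL name is an assumption):

# Any one of these domains matches the "social" ACL (values are OR'ed):
acl social dstdomain .mail.yahoo.com .linkedin.com .youtube.com
delay_access 3 allow social
# Listing three separate domain ACLs on that one line would AND them,
# which no single request can ever satisfy.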
Hey,
My recommendation for YouTube caching is to use a special server that will
store the YouTube videos locally.
I have created such a service, which runs on a Linux box; you can see the
details at:
http://gogs.ngtech.co.il/elicro/youtube-store
I have not completed every tool that I wa
On 27/09/17 10:30, Aaron Turner wrote:
Doing some basic system tests and we're seeing a bunch of errors like:
2017/09/22 22:43:15 kid1| Bug: Missing MemObject::storeId value
2017/09/22 22:43:15 kid1| mem_hdr: 0x7f169d0a2a70 nodes.start() 0x7f169c6cc9d0
2017/09/22 22:43:15 kid1| mem_hdr: 0x7f169d
So reading the bug comments it doesn't sound like there are any config
changes I can make (other than not using rock, which in SMP doesn't
sound like a good idea). I might be able to run ALL,9 and collect
the output... I would need to sanitize the URLs due to privacy/security
concerns. Anything else
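For reference, the full trace mentioned above is enabled with (log path assumed):

# Maximum verbosity for all debug sections; expect very large logs.
debug_options ALL,9
cache_log /var/log/squid/cache.log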
On 27/09/17 12:55, Aaron Turner wrote:
So reading the bug comments it doesn't sound like there are any config
changes I can make (other than not using rock, which in SMP doesn't
sound like a good idea). I might be able to run ALL,9 and collect
the output... I would need to sanitize the URLs due to