[squid-users] Squid 5.1 memory usage

2021-10-08 Thread Steve Hill


I'm seeing high memory usage on Squid 5.1.  Caching is disabled, so I'd 
expect memory usage to be fairly low (and it was under Squid 3.5), but 
some workers are growing pretty large.  I'm using ICAP and SSL bump.


I've got a worker using 5 GB, from which I've collected memory stats - 
the things that stand out are:

 - Long Strings: 220 MB
 - Short Strings: 2.1 GB
 - Comm::Connection: 217 MB
 - HttpHeaderEntry: 777 MB
 - MemBlob: 773 MB
 - Entry: 226 MB

What's the best way of debugging this?  Is there a way to list all of 
the Comm::Connection objects?
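
The figures above are the sort of thing the cache manager's memory report 
shows; a rough way to pull the same data (assuming squidclient is installed 
and the proxy answers on the default port on localhost) is:

  # per-pool memory report (the table the figures above came from)
  squidclient -h 127.0.0.1 -p 3128 mgr:mem

  # open file descriptors / sockets - the nearest thing to a per-connection list
  squidclient -h 127.0.0.1 -p 3128 mgr:filedescriptors

  # overall process stats, including how much memory the pools account for
  squidclient -h 127.0.0.1 -p 3128 mgr:info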


Thanks.

--
- Steve Hill
   Technical Director | Cyfarwyddwr Technegol
   Opendium Online Safety & Web Filtering   http://www.opendium.com
   Diogelwch Ar-Lein a Hidlo Gwefan

   Enquiries | Ymholiadau:   sa...@opendium.com +44-1792-824568
   Support   | Cefnogi:  supp...@opendium.com   +44-1792-825748


Opendium Limited is a company registered in England and Wales.
Mae Opendium Limited yn gwmni sydd wedi'i gofrestru yn Lloegr a Chymru.

Company No. | Rhif Cwmni:   5465437
Highfield House, 1 Brue Close, Bruton, Somerset, BA10 0HY, England.


Re: [squid-users] Squid 5.1 memory usage

2021-10-08 Thread Steve Hill

On 08/10/2021 15:50, Alex Rousskov wrote:


Is there a way to list all of the Comm::Connection objects?


The exact answer is "no", but you can use mgr:filedescriptors as an
approximation.


I've had to restart this process now (but I'm sure the problem will be 
back next week).  I did run netstat on it first though, and the number of 
established TCP connections was 1090 - that is obviously made up of 
client->proxy, proxy->origin and proxy->ICAP connections - and my gut 
feeling is that it isn't enough connections to account for 200-odd MB 
of Comm::Connection objects.
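
For the record, the sort of sanity check I ran looks roughly like this 
(assumes netstat from net-tools; needs root for the process column):

  # rough count of established TCP connections held by the squid workers
  netstat -tnp 2>/dev/null | grep squid | grep -c ESTABLISHED

  # breakdown by TCP state, to spot piles of half-closed sockets
  netstat -tnp 2>/dev/null | grep squid | awk '{print $6}' | sort | uniq -c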





Re: [squid-users] [SPAM] [ext] Squid 5.1 memory usage

2021-10-08 Thread Steve Hill

On 08/10/2021 10:24, Ralf Hildebrandt wrote:


I'm seeing high memory usage on Squid 5.1.  Caching is disabled, so I'd
expect memory usage to be fairly low (and it was under Squid 3.5), but some
workers are growing pretty large.  I'm using ICAP and SSL bump.


https://bugs.squid-cache.org/show_bug.cgi?id=5132
is somewhat related


I'm not sure if it's the same thing.  In that bug, Alex said it looked 
like Squid wasn't maintaining counters for the leaked memory, whereas in 
my case the "Total" row in mgr:mem tracks the memory usage reported by 
top reasonably closely, so it looks like the memory should all be accounted for.


There are similarities though - lots of memory going to HttpHeaderEntry 
and Short Strings in both cases.
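
A crude way to keep an eye on whether that accounting keeps tracking the 
process size over time (a sketch - the grep pattern just picks out the pools 
mentioned above):

  # Squid's own pool accounting: the "Total" row plus the biggest offenders
  squidclient mgr:mem | egrep 'Total|Long Strings|Short Strings|HttpHeaderEntry|MemBlob'

  # what the kernel thinks each squid process is using (RSS, in KB)
  ps -o pid=,rss=,cmd= -C squid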





Re: [squid-users] [SPAM] [ext] Squid 5.1 memory usage

2021-10-15 Thread Steve Hill

On 12/10/2021 09:34, Ralf Hildebrandt wrote:


Quite sure, since I've been testing Squid-5-HEAD before it became 5.2.
But to be sure, I'm deploying it right now.


Yep, squid-5.2 is also leaking.


:(

I'm now reasonably sure that mine is a recurrence of:
https://bugs.squid-cache.org/show_bug.cgi?id=4526
...which I had thought had gone away in Squid 5.1.  I will apply the 
patch next week and see if the problem goes away again.





[squid-users] High memory usage associated with ssl_bump and broken clients

2017-09-08 Thread Steve Hill


I've identified a problem with Squid 3.5.26 using a lot of memory when 
some broken clients are on the network.  Strictly speaking this isn't 
really Squid's fault, but it is a denial-of-service vector, so I 
wonder if Squid can help mitigate it.


The situation is this:

Squid is set up as a transparent proxy performing SSL bumping.
A client makes an HTTPS connection, which Squid intercepts.  The client 
sends a TLS client handshake and squid responds with a handshake and the 
bumped certificate.  The client doesn't like the bumped certificate, but 
rather than cleanly aborting the TLS session and then sending a TCP FIN, 
it just tears down the connection with a TCP RST packet.


Ordinarily, Squid's side of the connection would be torn down in 
response to the RST, so there would be no problem.  But unfortunately, 
under high network loads the RST packet sometimes gets dropped and as 
far as Squid is concerned the connection never gets closed.


The busted clients I'm seeing the most problems with retry the 
connection immediately rather than waiting for a retry timer.



Problems:
1. A connection that hasn't completed the TLS handshake doesn't appear 
to ever time out (in this case, the server handshake and certificate 
exchange has been completed, but the key exchange never starts).


2. If the client sends an RST and the RST is lost, the client won't send 
another RST until Squid sends some data to it on the aborted connection. 
 In this case, Squid is waiting for data from the client, which will 
never come, and will not send any new data to the client.  Squid will 
never know that the client aborted the connection.


3. There is a lot of memory associated with each connection - my tests 
suggest around 1MB.  In normal operation these kinds of dead connections 
can gradually stack up, leading to a slow but significant memory "leak"; 
when a really badly behaved client is on the network it can open tens of 
thousands of connections per minute and the memory consumption brings 
down the server.


4. We can expect similar problems with devices on flaky network 
connections, even when the clients are well behaved.



My thoughts:
Connections should have a reasonably short timeout during the TLS 
handshake - if a client hasn't completed the handshake and made an HTTP 
request over the encrypted connection within a few seconds, something is 
broken and Squid should tear down the connection.  These connections 
certainly shouldn't be able to persist forever with neither side sending 
any data.
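
One partial mitigation that needs no code changes is to enable TCP keepalives 
on the intercepting port, so that the kernel eventually notices peers which 
disappeared without a FIN or RST ever reaching Squid.  A sketch of what that 
might look like (the port, certificate path and keepalive values here are 
illustrative, not my actual config):

  # squid.conf fragment: probe after 60s idle, every 30s, give up after 3 failed probes
  https_port 3131 intercept ssl-bump generate-host-certificates=on cert=/etc/squid/bump-ca.pem tcpkeepalive=60,30,3

That doesn't give us the short TLS handshake timeout described above, but it 
does put an upper bound on how long a genuinely dead connection can hang around.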



Testing:
I wrote a Python script that makes 1000 concurrent connections as 
quickly as it can and sends a TLS client handshake over each of them.  Once 
all of the connections are open, it waits for responses from Squid 
(which would contain the server handshake and certificate) and then quits, 
tearing down all of the connections with an RST.


It seems that the RST packets for around 300 of those connections were 
dropped - this sounds surprising, but since all 1000 connections were 
aborted simultaneously there would have been a flood of RST packets, and 
it's probably reasonable to expect a significant number to be dropped.  The 
end result was that netstat showed Squid still had about 300 established 
connections, which would never go away.




Re: [squid-users] SSL bump memory leak

2016-02-23 Thread Steve Hill
On 23/02/16 17:30, Amos Jeffries wrote:

> And a leak (real or pseudo) means they are still hanging around in
> memory for some reason other than cert-cache references (being in the
> cache by definition is not-leaking). For example as part of active TLS
> sessions when the core was produced.

Seems pretty unlikely that there were over 130 thousand active TLS
sessions in just one of 2 worker threads at the time the core was generated.

I'm seeing Squid processes continually increase to many gigabytes in
size before I have to restart them to avoid the servers ending up deep
in swap.  If this was just things held during "active sessions" I would
expect to see the memory freed up again overnight when there isn't much 
traffic - I see no such reduction in memory usage.



Re: [squid-users] Youtube "challenges"

2016-02-24 Thread Steve Hill

On 23/02/16 05:01, Darren wrote:


I am putting together a config to allow the kids to access selected
videos in YouTube from a page of links on a local server.

I am serving up the YouTube links in the iframe format that is used
for embedding, and they play embedded on a page from a local server.

The issue is that YouTube is "doing the world a favor" by enforcing
HTTPS connections from within the code it serves into the iframe, so I
can't see anything that goes on and need to allow CONNECT to YouTube via
squid or I don't get any video.

I want to make sure the kids don't stray out of the selected library and
I don't want them being able to go onto https://www.youtube.com (via the
CONNECT ACL).


Two options:

1. Use the Google Apps / YouTube Restricted Mode integration.  There's 
some info here:

http://www.opendium.com/node/46
https://support.google.com/a/topic/6206681

2. SSL bump the connection and do some slightly painful real-time 
analysis of the data.


For what it's worth, we sell filtering systems to schools across the UK 
and, as far as I know, our product is the only one available that can do 
the latter.

See: http://www.opendium.com/node/41




Re: [squid-users] SSL bump memory leak

2016-02-24 Thread Steve Hill

On 23/02/16 21:28, Amos Jeffries wrote:


Ah, you said "a small number" of wiki cert strings with those details. I
took that as meaning a small number of definitely squid generated ones
amidst the 130K indeterminate ones leaking.


Ah, a misunderstanding on my part - sorry.  Yes, there were 302 strings 
containing "signTrusted" (77 of them unique), all of which appear to be 
server certificates (i.e. with a CN containing a domain name), so it is 
possibly reasonable to assume that they were for in-progress sessions 
and would therefore be cleaned up.


This leaves around 131297 other subject/issuer strings (581 unique) 
which, to my mind, can't be explained by anything other than a leak 
(whether that be a "real" leak where the pointers have been discarded 
without freeing the data, or a "pseudo" leak caused by references to 
them being held forever).


The SslBump wiki page (http://wiki.squid-cache.org/Features/SslBump) 
says that the SSL context used for talking to servers is wiped on 
reconfigure, and from what I've seen in the code it looks like this 
should still be true.  However, a reconfigure doesn't seem to help in 
this case, so my assumption is that this data is not part of that SSL 
context.  I'm not sure where else all of this data could be from though.


As much of the data seems to be intermediate and root CA certificates, it 
is presumably being collected from web servers rather than being 
generated locally.  Of the 131K strings not containing "signTrusted", 
only 2760 appear to be server certificates (86 unique), so it 
seems to me that the rest of the data is probably the intermediate 
certificate chains from web servers that Squid has connected to.


It looks like there were also over 400K bumped requests split across 2 
workers, so although 131K certificates is a massive amount of "leaked" 
data, I don't think we are leaking on every connection.  That, coupled 
with the fact that I can't seem to reproduce this in a test environment, 
suggests that something a little abnormal is going on to trigger 
the leak.  Also bear in mind that a single certificate will show up as 2 
separate strings, since it has both a subject and an issuer, so we're 
probably actually talking about around 65K certificates.




Re: [squid-users] Youtube "challenges"

2016-02-25 Thread Steve Hill

On 25/02/16 02:33, Eliezer Croitoru wrote:


I have not reviewed every product, but I have tried a couple, and those I
have tested do not have a really good system that filters YouTube videos
the way I would have imagined.
I have not tested your product... and I was wondering whether the following URL
will be filtered by your software in some way?

https://www.youtubeeducation.com/embed/KdS6HFQ_LUc


That URL tells me "This video is unavailable with the Education Filter 
enabled.  To view this video, the site network administrator will need 
to add it to a playlist".


Correct me if I'm wrong, but isn't that Google's old YouTube for 
Education system which they no longer support?



I have seen a couple of pretty amazing filtering ideas, but each has
its own limits. For example, it is possible to analyze every in-transit
image and video+audio and categorize them, which is a great
solution for many, but the Achilles heel is always there.
Some filters have higher false-positive rates while others have fewer, but
leave the user in the abyss after reading a weird faked ransom-malware JS
page.


Our filters have both a URI categorisation database and a content 
analysis engine.  The content analysis engine does text analysis, not 
video analysis, but you can still get some useful categorisation out of 
the descriptions on YouTube videos.


Google Apps integrates with YouTube restricted mode to allow school 
staff to whitelist videos that would otherwise be disallowed, so a lot 
of schools use that.  That's Google's replacement for YouTube for 
Education, which they officially stopped supporting last year, but in 
reality it has been dead for a couple of years.


In this case, my understanding was that Darren wanted to blacklist 
YouTube, but still allow the embedded youtube videos to play on his 
local page.  With one of our filters that would be easy - it requires no 
content analysis since he wants to block the whole of YouTube.  He'd 
then just create an override for the local web page telling the system 
to allow the videos that are embedded in that page.




Re: [squid-users] Youtube "challenges"

2016-02-25 Thread Steve Hill

On 25/02/16 03:52, Darren wrote:


The user visits a page on my server with the YouTube links. Visiting
this page triggers a state based ACL (something like the captive portal
login).

The user then clicks a YouTube link and squid checks this ACL to see if
the user is originating the request from my local page and if it is,
allows the splice to YouTube and the video can play.


Squid can't tell that the requests were referred by your page - the 
iframe itself may have your page as the referrer (although that 
certainly isn't guaranteed), but the objects requested from within 
that iframe won't have a useful referrer string.


You could dynamically create an ACL that allows the whole of youtube 
when the user has your page open, but that is fairly insecure since they 
could just open the page and then they would be allowed to access 
anything through youtube.


In my experience (and this is what we do), to be at all secure you have 
to analyse the page itself in order to figure out which specific URIs to 
whitelist (or at least, have those URIs hard-coded somewhere else).


Either way, YouTube uses https, so unless you're going to blindly allow 
the whole of youtube whenever a user visits your page, you're going to 
need to ssl bump the requests in order to have an ACL based on the 
referrer and path.  And as you know, ssl bumping involves sticking a 
certificate on each device.




[squid-users] host_verify_strict and wildcard SNI

2016-07-06 Thread Steve Hill


I'm using a transparent proxy and SSL-peek and have hit a problem with 
an iOS app which seems to be doing broken things with the SNI.


The app is making an HTTPS connection to a server and presenting an SNI 
with a wildcard in it - i.e. "*.example.com".  I'm not sure if this 
behaviour is actually illegal, but it certainly doesn't seem to make a 
lot of sense to me.


Squid then internally generates a "CONNECT *.example.com:443" request 
based on the peeked SNI, which is picked up by hostHeaderIpVerify(). 
Since *.example.com isn't a valid DNS name, Squid rejects the connection 
on the basis that *.example.com doesn't match the IP address that the 
client is connecting to.
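
For reference, the behaviour is easy to reproduce from the command line (the 
address below is just a placeholder for whatever the app actually connects to):

  # present a wildcard SNI, as the broken app does; Squid peeks it, builds a
  # "CONNECT *.example.com:443", and then fails host verification
  openssl s_client -connect 192.0.2.1:443 -servername '*.example.com'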


Unfortunately, I can't see any way of working around the problem - 
"host_verify_strict" is disabled, but according to the docs,
"For now suspicious intercepted CONNECT requests are always responded to 
with an HTTP 409 (Conflict) error page."


As I understand it, turning host_verify_strict on causes problems with 
CDNs which use DNS tricks for load balancing, so I'm not sure I 
understand the rationale behind preventing it from being turned off for 
CONNECT requests?




[squid-users] Skype, SSL bump and go.trouter.io

2016-07-06 Thread Steve Hill


I've been finding some problems with Skype when combined with TProxy and 
HTTPS interception and wondered if anyone had seen this before:


Skype works so long as HTTPS interception is not performed and traffic 
to TCP and UDP ports 1024-65535 is allowed directly out to the internet. 
 Enabling SSL-bump seems to break things - When making a call, Skype 
makes an SSL connection to go.trouter.io, which Squid successfully 
bumps.  Skype then makes a GET request to 
https://go.trouter.io/v3/c?auth=true&timeout=55 over the SSL connection, 
but the HTTPS server responds with a "400 Bad Request" error and Skype 
fails to work.


The Skype client clearly isn't rejecting the intercepted connection 
since it is making HTTPS requests over it, but I can't see why the 
server would be returning an error.  Obviously I can't see what's going 
on inside the connection when it isn't being bumped, but it does work 
then.  The only thing I can think is maybe the server is examining the 
SSL handshake and returning an error because it knows it isn't talking 
directly to the Skype client - but that seems like an odd way of doing 
things, rather than rejecting the SSL handshake in the first place.




Re: [squid-users] Skype, SSL bump and go.trouter.io

2016-07-07 Thread Steve Hill

On 06/07/16 20:44, Eliezer Croitoru wrote:


There are a couple of options here, and a bad request can happen if
squid transforms or modifies the request. Did you try to use basic
debug section output to verify whether you are able to "replicate" the
request using a tiny script or curl? I think that section 11 is the
right one to start with
(http://wiki.squid-cache.org/KnowledgeBase/DebugSections). There were a
couple of issues with intercepted https connections in the past, but a
400 means that something is bad, mainly in the expected input rather than
a certificate, though other reasons are possible. I
have not tried to use skype in a transparent environment for a very
long time, but I can try to test it later.


I tcpdumped the icap REQMOD session to retrieve the request and tried it
manually (direct to the Skype server) with openssl s_client.  The Skype
server (not Squid) returned a 400.  But of course, the Skype request
contains various data that the server will probably (correctly) see as a
replay attack, so it isn't a very good test - all I can really say is
that the real Skype client was getting exactly the same error from the
server when the connection is bumped, but works fine when it is tunnelled.
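
For anyone wanting to repeat that kind of test, the replay was roughly this 
(request.txt being the request headers recovered from the REQMOD capture - and, 
as above, not a very meaningful test because of the replay detection):

  # replay the captured request straight at the origin, bypassing Squid
  openssl s_client -connect go.trouter.io:443 -quiet < request.txt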

Annoyingly, Skype doesn't include an SNI in the handshake, so peeking in
order to exclude it from being bumped isn't an option.

The odd thing is that I have had Skype working in a transparent 
environment previously (with the unprivileged ports unfirewalled), so I 
wonder if this is something new from Microsoft.




Re: [squid-users] Skype, SSL bump and go.trouter.io

2016-07-07 Thread Steve Hill

On 07/07/16 11:07, Eliezer Croitoru wrote:


Can you please verify, using debug 11,9, that squid is not altering the request 
in any form?
Such as mentioned at: http://bugs.squid-cache.org/show_bug.cgi?id=4253


Thanks for this.  I've compared the headers and the original contains:
Upgrade: websocket
Connection: Upgrade

Unfortunately, since Squid doesn't support websockets, I think there's no 
way around this - by the time we see the request and can identify it as 
Skype we've already bumped it, so we're committed to passing it through 
Squid's HTTP engine.  :(




Re: [squid-users] host_verify_strict and wildcard SNI

2016-07-07 Thread Steve Hill

On 06/07/16 20:54, Eliezer Croitoru wrote:


There are other options of course, but the first thing to check is whether the client 
is a real browser or some special creature that tries its luck with a special 
form of SSL.


In this case it isn't a real web browser - it's an iOS app, and the 
vendor has stated that they have no intention of fixing it :(




Re: [squid-users] host_verify_strict and wildcard SNI

2016-07-07 Thread Steve Hill

On 07/07/16 02:07, Alex Rousskov wrote:


Q1. Is wildcard SNI "legal/valid"?

I do not know the answer to that question. The "*.example.com" name is
certainly legal in many DNS contexts. RFC 6066 requires HostName SNI to
be a "fully qualified domain name", but I failed to find a strict-enough
RFC definition of an FQDN that would either accept or reject wildcards
as FQDNs. I would not be surprised if FQDN syntax is not defined to the
level that would allow one to reject wildcards as FQDNs based on syntax
alone.


Wildcards can be specified in DNS zonefiles, but I don't think you can 
ever look them up directly (rather, you look up "something.example.com" 
and the DNS server itself decides to use the wildcard record to fulfil 
that request - you never look up *.example.com itself).



Q2. Can wildcard SNI "make sense" in some cases?

Yes, of course. The client essentially says "I am trying to connect to
_any_ example.com subdomain at this IP:port address. If you have any
service like that, please connect me". That would work fine in
deployment contexts where several servers with different names provide
essentially the same service and the central "routing point" would pick
the "best" service to use. I am not saying it is a good idea to use
wildcard SNIs, but I can see them "making sense" in some cases.


Realistically, shouldn't the SNI reflect the DNS request that was made 
to find the IP of the server you're connecting to?  You would never make 
a DNS request for '*.example.com' so I don't see a reason why you would 
send an SNI that has a larger scope than the DNS request you made.




Re: [squid-users] host_verify_strict and wildcard SNI

2016-07-11 Thread Steve Hill

On 07/07/16 12:30, Marcus Kool wrote:


Here things get complicated.
Is it correct that Squid enforces apps to follow standards, or
should Squid try to proxy connections for apps when it can?


I would say no: where it is possible for Squid to allow an app to work, 
even where it isn't following standards (without compromising security / 
other software / etc.) then Squid needs to try to make the app work.


Unfortunately, end users do not understand the complexities, and if an 
app works on their home internet connection but doesn't work through 
their school / office connection (which is routed through Squid), then as 
far as they are concerned the school / office connection is "broken", 
even if the problem is actually a broken app.


This is made worse by (1) the perception that big businesses such as 
Microsoft / Apple / Google can never be wrong (even though this is not 
borne out by experience of their software), and (2) the fact that app 
developers rarely seem at all interested in acknowledging or fixing such 
bugs (in my experience).


So in the end you have a choice: live with people accusing Squid of 
being "broken" and refuse to allow applications that will never be fixed 
to work, or work around the broken apps within Squid and therefore get 
them working without the cooperation of the app developers.




[squid-users] Large memory leak with ssl_peek (now partly understood)

2016-08-11 Thread Steve Hill


I've been suffering from a significant memory leak on multiple servers 
running Squid 3.5 for months, but was unable to reproduce it in a test 
environment.  I've now figured out how to reproduce it and have done 
some investigation:


When using TPROXY, Squid generates fake "CONNECT 192.0.2.1:443" 
requests, using the IP address that the client connected to.  At 
ssl_bump step 1, we peek and Squid generates another fake "CONNECT 
example.com:443" request containing the SNI from the client's SSL handshake.


At ssl_bump step 2 we splice the connection and Squid does verification 
to make sure that example.com does actually resolve to 192.0.2.1.  If it 
doesn't, Squid is supposed to reject the connection in 
ClientRequestContext::hostHeaderVerifyFailed() to prevent clients from 
manipulating the SNI to bypass ACLs.


Unfortunately, when verification fails, rather than actually dropping 
the client's connection, Squid just leaves the client hanging. 
Eventually the client (hopefully) times out and drops the connection 
itself, but the associated ClientRequestContext is never destroyed.


This is testable by repeatedly executing:

openssl s_client -connect 17.252.76.30:443 -servername courier.push.apple.com


That is a traffic pattern that we see in the real world and is now 
clearly what is triggering the leak: Apple devices make connections to 
addresses within the 17.0.0.0/8 network with an SNI of 
"courier.push.apple.com".  courier.push.apple.com resolves to a CNAME 
pointing to courier-push-apple.com.akadns.net, but 
courier-push-apple.com.akadns.net doesn't exist.  Since Squid can't 
verify the connection, it won't allow it and after 30 seconds the client 
times out.  Each Apple device keeps retrying the connection, leaking a 
ClientRequestContext each time, and before long we've leaked several 
gigabytes of memory (on some networks I'm seeing 16GB or more of leaked 
RAM over 24 hours!).
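
On a quiet test box the leak rate is easy to confirm with something along these 
lines (a rough sketch - it just drives the command above in a loop while 
watching the workers' RSS):

  # hammer the failing-verification path; each attempt leaks a ClientRequestContext
  while true; do
    timeout 2 openssl s_client -connect 17.252.76.30:443 \
        -servername courier.push.apple.com </dev/null >/dev/null 2>&1
  done &

  # watch the kid processes grow (RSS in KB)
  watch -n 10 'ps -o pid=,rss=,cmd= -C squid'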


Unfortunately I'm a bit lost in the Squid code and can't quite figure 
out how to gracefully terminate the connection and destroy the context.




Re: [squid-users] Large memory leak with ssl_peek (now partly understood)

2016-08-12 Thread Steve Hill



This sounds very similar to Squid bug 4508. Factory proposed a fix
for that bug, but the patch is for Squid v4. You may be able to adapt it
to v3. Testing (with any version) is very welcomed, of course:


Thanks for that - I'll look into adapting and testing it.

(been chasing this bug off and on for months - hadn't spotted that there 
was a bug report open for it :)





[squid-users] Checking SSL bump status in http_access

2016-08-16 Thread Steve Hill


Is there a way of figuring out if the current request is a bumped 
request when the http_access ACL is being checked?  i.e. can we tell the 
difference between a GET request that is inside a bumped tunnel, and an 
unencrypted GET request?




Re: [squid-users] Large memory leak with ssl_peek (now partly understood)

2016-08-17 Thread Steve Hill

On 17/08/16 06:22, Dan Charlesworth wrote:


Deployed a 3.5.20 build with both of those patches and have noticed a big 
improvement in memory consumption of squid processes at a couple of 
splice-heavy sites.

Thank you, sir!


We've now started tentatively rolling this out to a few production sites 
too and are seeing good results so far.





Re: [squid-users] Checking SSL bump status in http_access

2016-08-18 Thread Steve Hill

On 17/08/16 17:18, Alex Rousskov wrote:


This configuration problem should be at least partially addressed by the
upcoming annotate_transaction ACLs inserted into ssl_bump rules:
http://lists.squid-cache.org/pipermail/squid-dev/2016-July/006146.html


That looks good.  When implementing this, beware the note in comment 3 
of bug 4340: http://bugs.squid-cache.org/show_bug.cgi?id=4340#c3
"for transparent connections, the NotePairs instance used during the 
step-1 ssl_bump ACL is not the same as the instance used during the 
http_access ACL, but for non-transparent connections they are the same 
instance.  The upshot is that any notes set by an external ACL when 
processing the ssl_bump ACL during step 1 are discarded when handling 
transparent connections."  - It would greatly reduce the functionality 
of your proposed ACLs if the annotations were sometimes discarded part 
way through a connection or request.


Something I've been wanting to do for a while is attach a unique 
"connection ID" and "request ID" to requests so that:
1. An ICAP server can make decisions about the connection (e.g. how to 
authenticate, whether to bump, etc.) and then refer back to the data it 
knows/generated about the connection when it processes the requests 
contained within that connection.
2. When multiple ICAP requests will be generated, they can be linked 
together by the ICAP server - e.g. where a single request will generate 
a REQMOD followed by a RESPMOD it would be good for the ICAP server to 
know which REQMOD and RESPMOD relate to the same request.


It sounds like your annotations plan may address this to some extent. 
(We can probably already do some of this by having the ICAP server 
generate unique IDs and store them in ICAP headers to be passed along 
with the request, but I think the bug mentioned above would cause those 
headers to be discarded mid-request in some cases)




Re: [squid-users] Checking SSL bump status in http_access

2016-08-18 Thread Steve Hill

On 17/08/16 00:12, Amos Jeffries wrote:


Is there a way of figuring out if the current request is a bumped
request when the http_access ACL is being checked?  i.e. can we tell the
difference between a GET request that is inside a bumped tunnel, and an
unencrypted GET request?


In Squid-3 a combo of the myportname and proto ACLs should do that.


I think when using a nontransparent proxy you can't tell the difference 
between:


1. HTTPS requests inside a bumped CONNECT tunnel, and
2. unencrypted "GET https://example.com/ HTTP/1.1" requests made 
directly to the proxy.
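
For completeness, the combination Amos describes would look something like this 
(the port name and the blocked_sites ACL are illustrative) - though, as noted 
above, it can't separate a bumped GET from a plain "GET https://..." sent 
directly to a non-transparent port:

  # squid.conf sketch: name the bumping port so it can be matched later
  https_port 3131 intercept ssl-bump name=bumpport cert=/etc/squid/bump-ca.pem generate-host-certificates=on

  acl from_bump_port myportname bumpport
  acl https_scheme proto HTTPS

  # e.g. apply a rule only to requests decrypted inside a bumped tunnel
  http_access deny from_bump_port https_scheme blocked_sites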





Re: [squid-users] Rock store status

2016-08-18 Thread Steve Hill

On 17/08/16 11:50, FredB wrote:


I tried rock store and SMP a long time ago (Squid 3.2, I guess).  Unfortunately I 
dropped SMP for good because of some limitations (in my case), and I fell back to 
diskd because there were many bugs with rock store.  FYI, I also switched to aufs 
without seeing big differences.

But what about now, with the latest 3.5.20?  Sadly SMP is still not for me, but rock store?

Is anyone using rock store with a high load, more than 800 r/s, without any problems?  
Is there a real difference in that situation - CPU, speed, memory?


We use SMP and Rock under the 3.5 series without problems, but I don't 
think any of our sites have as high a req/sec load as yours.




Re: [squid-users] Rock store status

2016-08-19 Thread Steve Hill

On 19/08/16 08:45, FredB wrote:


Please can you describe your load and configurations ?


We supply Squid based online safety systems to schools across the UK, 
utilising Rock store for caching and peek/splice, external ACLs and ICAP 
for access control/filtering/auditing.  Typically I think our biggest 
schools probably top out at around 400,000 requests/hour, but I don't 
have any hard data to hand to back that up at the moment.


The only serious Squid issue we've been tracking recently is the memory 
leak associated with spliced connections, which we've now fixed (and 
submitted patches).  That said, with the schools currently on holiday 
those fixes haven't yet been well tested on real-world servers - we'll 
find out if there are any issues with them when term starts again :)




[squid-users] More host header forgery pain with peek/splice

2016-08-25 Thread Steve Hill


This one just seems to keep coming up and I'm wondering how other people 
are dealing with it:


When you peek and splice a transparently proxied connection, the SNI 
goes through the host validation phase.  Squid does a DNS lookup for the 
SNI, and if it doesn't resolve to the IP address that the client is 
connecting to, Squid drops the connection.


When accessing one of the increasingly common websites that use DNS load 
balancing, since the DNS results change on each lookup, Squid and the 
client may not get the same DNS results, so Squid drops perfectly good 
connections.


Most of this problem goes away if you ensure all the clients use the 
same DNS server as squid, but not quite.  Because the TTL on DNS records 
only has a resolution of 1 second, there is a period of up to 1 second 
when the DNS records Squid knows about don't match the ones that the 
client knows about - the client and squid may expire the records up to 1 
second apart.
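
A quick way to see how often this actually bites is to compare, back to back, 
what Squid's resolver and a client's resolver hand out for the same name (the 
hostname and resolver addresses below are placeholders):

  # on DNS-load-balanced sites the two answer sets regularly differ
  dig +short www.example-cdn.com @192.0.2.53    # the resolver squid uses
  dig +short www.example-cdn.com @192.0.2.54    # the resolver the clients use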


So what's the solution?  (Notably the validation check can't be disabled 
without hacking the code).




[squid-users] %un format code doesn't work for external ssl_bump ACLs

2015-08-28 Thread Steve Hill


Squid 3.5.7

I'm using an external ACL to decide whether to bump traffic during SSL 
bump step 2.  The external ACL needs to know the user's username for 
requests that have authenticated, but not all requests are authenticated 
so I can't use %LOGIN and I'm therefore using %un instead.  However, %un 
is never being filled in with a user name.



The relevant parts of the config are:

http_access allow proxy_auth
http_access deny all
external_acl_type sslpeek children-max=10 concurrency=100 ttl=0 
negative_ttl=0 %SRC %un %URI %ssl::>sni %>ha{User-Agent} 
/usr/sbin/check_bump.sh

acl sslpeek external sslpeek
acl ssl_bump_step_1 at_step SslBump1
acl ssl_bump_step_2 at_step SslBump2
acl ssl_bump_step_3 at_step SslBump3
ssl_bump peek ssl_bump_step_1 #icap_says_peek
ssl_bump bump ssl_bump_step_2 sslpeek
ssl_bump splice all
sslproxy_cert_error allow all


The debug log shows that the request is successfully authenticated:

Acl.cc(138) matches: checking proxy_auth
UserData.cc(22) match: user is steve, case_insensitive is 0
UserData.cc(28) match: aclMatchUser: user REQUIRED and auth-info present.
Acl.cc(340) cacheMatchAcl: ACL::cacheMatchAcl: miss for 'proxy_auth'. 
Adding result 1

Acl.cc(158) matches: checked: proxy_auth = 1

But then later in the log I see:

external_acl.cc(1416) Start: fg lookup in 'sslpeek' for 
'2a00:1940:1:8:468a:5bff:fe9a:cd7f - www.hsbc.co.uk:443 www.hsbc.co.uk 
Mozilla/5.0%20(X11;%20Fedora;%20Linux%20x86_64;%20rv:39.0)%20Gecko/20100101%20Firefox/39.0'



The user name given to the external ACL is "-" even though the request 
has been authenticated.  Setting a->require_auth in 
parse_externalAclHelper() makes it work, but obviously just makes %un 
behave like %LOGIN, so isn't a solution.




[squid-users] ICAP response header ACL

2015-10-01 Thread Steve Hill


The latest adaptation response headers are available through the 
%adapt:: logformat codes - is there any way to access these headers 
through an ACL?


The documentation says that adaptation headers are available in the 
notes, but this only appears to cover headers set with adaptation_meta, not 
the ICAP response headers.  I had also considered using the "note" 
directive to explicitly stuff the headers into the notes, but it looks 
like the note directive doesn't allow you to use format strings, so I 
can't copy the %adapt:: headers into a note that way.





[squid-users] Assert, followed by shm_open() fail.

2015-11-09 Thread Steve Hill


On Squid 3.5.11 I'm seeing occasional asserts:

2015/11/09 13:45:21 kid1| assertion failed: DestinationIp.cc:41: 
"checklist->conn() && checklist->conn()->clientConnection != NULL"


More concerning, though, is that when a Squid process crashes it is usually 
restarted automatically, but following these asserts I'm often seeing:


FATAL: Ipc::Mem::Segment::open failed to 
shm_open(/squidnocache-squidnocache-cf__metadata.shm): (2) No such file 
or directory


After this, Squid is still running, but won't service requests and 
requires a manual restart.


Has anyone seen this before?

Cheers.



Re: [squid-users] sslBump and intercept

2015-11-12 Thread Steve Hill

On 12/11/15 09:04, Eugene M. Zheganin wrote:


I decided to intercept the HTTPS traffic on my production squids from
proxy-unaware clients to be able to tell them there's a proxy and they
should configure one.
So I'm doing it like this (the process of forwarding using FreeBSD pf is not
shown here):

===Cut===
acl unauthorized proxy_auth stringthatwillnevermatch
acl step1 at_step sslBump1

https_port 127.0.0.1:3131 intercept ssl-bump
cert=/usr/local/etc/squid/certs/squid.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
dhparams=/usr/local/etc/squid/certs/dhparam.pem
https_port [::1]:3131 intercept ssl-bump
cert=/usr/local/etc/squid/certs/squid.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
dhparams=/usr/local/etc/squid/certs/dhparam.pem

ssl_bump peek step1
ssl_bump bump unauthorized
ssl_bump splice all
===Cut===

Almost everything works, except that squid for some reason is generating
certificates in this case for IP addresses, not names, so the browser
shows a warning about the certificate being valid only for the IP, and not the name.


proxy_auth won't work on intercepted traffic and will therefore always 
return false, so as far as I can see you're always going to peek and 
then splice.  i.e. you're never going to bump, so squid should never be 
generating a forged certificate.


You say that Squid _is_ generating a forged certificate, so something 
else is going on to cause it to do that.  My first guess is that Squid 
is generating some kind of error page due to some http_access rules 
which you haven't listed, and is therefore bumping.


Two possibilities spring to mind for the certificate being for the IP 
address rather than for the name:
1. The browser isn't bothering to include an SNI in the SSL handshake 
(use wireshark to confirm - see the sketch below).  In this case, Squid has 
no way to know what name to stick in the cert, so it will just use the IP instead.
2. The bumping is happening in step 1 instead of step 2 for some reason. 
See: http://bugs.squid-cache.org/show_bug.cgi?id=4327
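
For point 1, a rough way to watch for the SNI on the wire (the interface is a 
placeholder, and older Wireshark builds use ssl.handshake.extensions_server_name 
rather than the tls.* field name):

  # print the SNI, if any, from client hellos crossing the wire
  tshark -i eth0 -Y 'tls.handshake.extensions_server_name' \
      -T fields -e tls.handshake.extensions_server_name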




Re: [squid-users] squid http & https intercept based on DNS server

2015-11-12 Thread Steve Hill

On 12/11/15 12:08, James Lay wrote:


Some applications (I'm thinking mobile apps) may or may not use a
hostname...some may simply connect to an IP address, which makes control
over DNS irrelevant at that point.  Hope that helps.


Also, redirecting all the DNS records to Squid will break everything 
that isn't http/https since there will be nothing on the squid server to 
handle that traffic.


It doesn't sound like a great idea to me - why not just redirect 
http/https traffic at the gateway (TPROXY) instead of mangling DNS?




[squid-users] kid registration timed out

2016-02-08 Thread Steve Hill
g Least Load store dir selection
03:43:37 kid1| Set Current Directory to /var/spool/squid-nocache
03:43:37 kid1| Finished loading MIME types and icons.
03:43:37 kid1| HTCP Disabled.
03:43:37 kid1| Configuring Parent [::1]/3129/0
03:43:37 kid1| Squid plugin modules loaded: 0
03:43:37 kid1| Adaptation support is on
03:43:38 kid1| storeLateRelease: released 0 objects
Squid Cache (Version 3.5.11): Terminated abnormally.
CPU Usage: 0.177 seconds = 0.124 user + 0.053 sys
Maximum Resident Size: 83088 KB
Page faults with physical i/o: 0
Squid Cache (Version 3.5.11): Terminated abnormally.
Squid Cache (Version 3.5.11): Terminated abnormally.
CPU Usage: 0.189 seconds = 0.127 user + 0.062 sys
Maximum Resident Size: 83072 KB
Page faults with physical i/o: 0
CPU Usage: 0.191 seconds = 0.130 user + 0.061 sys
Maximum Resident Size: 83072 KB
Page faults with physical i/o: 0
03:43:43 kid1| Closing HTTP port [::]:3128
03:43:43 kid1| Closing HTTP port [::]:8080
03:43:43 kid1| Closing HTTP port [::]:3130
03:43:43 kid1| Closing HTTPS port [::]:3131
03:43:43 kid1| storeDirWriteCleanLogs: Starting...
03:43:43 kid1|   Finished.  Wrote 0 entries.
03:43:43 kid1|   Took 0.00 seconds (  0.00 entries/sec).
FATAL: kid1 registration timed out
Squid Cache (Version 3.5.11): Terminated abnormally.
CPU Usage: 0.193 seconds = 0.137 user + 0.056 sys
Maximum Resident Size: 83104 KB
Page faults with physical i/o: 0


There are actually 4 workers, but I have excluded the log lines for 
"kid[2-9]" as they seem to show exactly the same as kid1.  I can't see 
any indication of why it is blowing up, other than "FATAL: kid1 
registration timed out" (and identical time outs for the other workers). 
 I seem to be left with a Squid process still running (so my monitoring 
doesn't alert me that Squid isn't running), but it doesn't service 
requests.  This isn't too bad if I'm manually restarting squid during 
the day, but if squid gets restarted in the night due to a package 
upgrade I can be left with a dead proxy that requires manual intervention.



The second problem, which may or may not be related, is that if Squid 
crashes (e.g. an assert()), it usually automatically restarts, but some 
times it fails and I see this logged:


FATAL: Ipc::Mem::Segment::open failed to 
shm_open(/squidnocache-cf__metadata.shm): (2) No such file or directory


Similar to the first problem, when this happens I'm still left with a 
squid process running, but it isn't servicing any requests.  I realise 
that it is a bug for Squid to crash in the first place, but it's 
compounded by the occasional complete loss of service when it happens.


Any help would be appreciated.  Thanks. :)



[squid-users] SSL bump memory leak

2016-02-23 Thread Steve Hill


I'm looking into (what appears to be) a memory leak in the Squid 3.5 
series.  I'm testing this in 3.5.13, but this problem has been observed 
in earlier releases too.  Unfortunately I haven't been able to reproduce 
the problem in a test environment yet, so my debugging has been limited 
to what I can do on production systems (so no valgrind, etc).


These systems are configured to do SSL peek/bump/splice and I see the 
Squid workers grow to hundreds or thousands of megabytes in size over a 
few hours.  A configuration reload does not reduce the memory 
consumption.  For debugging purposes, I have set 
"dynamic_cert_mem_cache_size=0KB" to disable the certificate cache, 
which should eliminate bug 4005.  I've taken a core dump to analyse and 
have found:


Running "strings" on the core, I can see that there are vast numbers of 
strings that look like certificate subject/issuer identifiers.  e.g.:
	/C=GB/ST=Greater Manchester/L=Salford/O=Comodo CA Limited/CN=Secure 
Certificate Services


The vast majority of these seem to refer to root and intermediate 
certificates.  There are a few that include a host name and are probably 
server certificates, such as:

/OU=Domain Control Validated/CN=*.soundcloud.com
But these are very much in the minority.

Also, notably they are mostly duplicates.  Compare the total number:
$ strings -n 10 -t x core.21693|egrep '^ *[^ ]+ /.{1,3}='|wc -l
131599
with the number of unique strings:
$ strings -n 10 -t x core.21693|egrep '^ *[^ ]+ /.{1,3}='|sort -u -k 2|wc -l
658

There are also a very small number of lines that look something like:
	/C=US/ST=California/L=San Francisco/O=Wikimedia Foundation, 
Inc./CN=*.wikipedia.org+Sign=signTrusted+SignHash=SHA256
I think the "+Sign=signTrusted+SignHash=SHA256" part would indicate that 
this is a Squid database key, which is very confusing since with the 
certificate cache disabled I wouldn't expect to see these at all.


--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-825748 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] leaking memory in squid 3.4.8 and 3.4.7.

2014-09-29 Thread Steve Hill

On 28.09.14 08:34, Eliezer Croitoru wrote:


Also to minimize the leak source try to share more information about
your usage.
Are you using SMP workers or not?
What are the mem and info options in the cache manager page data?
Did you try a "cache deny all" acl to minimize the source of the leak?


For what its worth, I'm also seeing a big memory leak on multiple 
servers under Squid 3.4.6:


I have two basic Squid configurations - one of them is a plain caching 
proxy and is _not_ leaking.  The other does no caching, but is doing 
ICAP, external ACLs, SSL bumping and TPROXY and leaks like a sieve 
(several gigabytes a day).


I _think_ I have narrowed it down to something ICAP related and I'm 
currently valgrinding.  Unfortunately I can't seem to get the valgrind 
instrumentation to work so I'm having to do without (I compile 
--with-valgrind-debug but "squidclient mgr:mem" doesn't produce a 
valgrind report).  I have noticed that there are a _lot_ of memory 
errors reported by valgrind though (uninitialised memory).


Pretty much all of the leaky memory is showing up as unaccounted:
Memory accounted for:
Total accounted:84306 KB   3%
memPool accounted:  84306 KB   3%
memPool unaccounted:   2917158 KB  97%

I am using SMP workers, but turning that off doesn't fix the issue.

--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-844-9791439 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-844-4844916 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] leaking memory in squid 3.4.8 and 3.4.7.

2014-09-30 Thread Steve Hill
.4%, 60min: 11.9%
Storage Swap size:  3773012 KB
Storage Swap capacity:  90.0% used, 10.0% free
Storage Mem size:   262144 KB
Storage Mem capacity:   100.0% used,  0.0% free
Mean Object Size:   28.55 KB
Requests given to unlinkd:  3198063
Median Service Times (seconds)  5 min60 min:
HTTP Requests (All):   0.02899  0.03241
Cache Misses:  0.03066  0.03241
Cache Hits:0.00405  0.00091
Near Hits: 0.03066  0.03427
Not-Modified Replies:  0.0  0.0
DNS Lookups:   0.0  0.0
ICP Queries:   0.0  0.0
Resource usage for squid:
UP Time:1574985.354 seconds
CPU Time:   32733.608 seconds
CPU Usage:  2.08%
CPU Usage, 5 minute avg:3.80%
CPU Usage, 60 minute avg:   3.54%
Maximum Resident Size: 1025200 KB
Page faults with physical i/o: 289968
Memory usage for squid via mallinfo():
Total space in arena:   49616 KB
Ordinary blocks:38418 KB  15268 blks
Small blocks:   0 KB  0 blks
Holding blocks: 10520 KB  7 blks
Free Small blocks:  0 KB
Free Ordinary blocks:   11198 KB
Total in use:   11198 KB 19%
Total free: 11198 KB 19%
Total size: 60136 KB
Memory accounted for:
Total accounted:27128 KB  45%
memPool accounted:  27128 KB  45%
memPool unaccounted:33008 KB  55%
memPoolAlloc calls: 5279809700
memPoolFree calls:  5314670336
File descriptor usage for squid:
Maximum number of file descriptors:   16384
Largest file desc currently in use: 67
Number of file desc currently in use:   50
Files queued for open:   0
Available number of file descriptors: 16334
Reserved number of file descriptors:   100
Store Disk files open:   0
Internal Data Structures:
132586 StoreEntries
   444 StoreEntries with MemObjects
  8192 Hot Object Cache Items
132142 on-disk objects



As a separate note: I'm not sure why the memory footprint of the caching 
squid is so low - with cache_mem set to 256MB (100% used, apparently) 
and 8 workers I would expect it to be much more.  Something else for me 
to investigate when I've got time. :)


--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-844-9791439 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-844-4844916 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] leaking memory in squid 3.4.8 and 3.4.7.

2014-09-30 Thread Steve Hill


On 30.09.14 15:13, Amos Jeffries wrote:


IIRC the valgrind report is strangely at the end of mgr:info rather
than mgr:mem.


In that case the wiki is wrong. :)
Anyway, it doesn't show up on either of them for me so I guess you might 
be right that it isn't "real" leaked memory.  (I still thought I'd see 
some indication that it was invoking the valgrind report though, even if 
it contained no data)



With one small exception all the reports of memory "leaks" in
3.2/3.3/3.4 series are in fact memory over-allocations. The difference
is that over-allocated memory is still referenced by some part of
Squid, so it *does not* show up in a leak finder like valgrind.


Do you have any advice for finding out what memory is still referenced 
in large amounts?  Since this is unaccounted memory, it presumably 
doesn't appear in mgr:mem (at least, I can't see anything obvious in there)?


I'm trying to figure out if there's a way of convincing valgrind to dump 
info about all the currently allocated memory while the program is still 
running - there would be a lot of legitimate stuff in the report, but 
hopefully a few hundred MB of memory that shouldn't be there would stick 
out like a sore thumb.



If you are using a 64-bit OS then the unaccounted numbers there are
bogus. They are based on mallinfo() lookup results which suffer from
32-bit wrap issues. Use the OS command line to get a memory report
from top or similar instead.


In this case I don't believe the data is bogus.  I have 8 workers, which 
top shows as:

26249 squid 20   0  871m 782m 5348 S  0.7 10.0   8:44.81 squid
26245 squid 20   0  732m 644m 5344 S  0.3  8.3   8:00.77 squid
26244 squid 20   0  706m 617m 5348 S  1.0  7.9   7:42.29 squid
26250 squid 20   0  699m 613m 5348 S  3.6  7.9   4:43.49 squid
26246 squid 20   0  699m 612m 5348 S  2.3  7.9   6:12.78 squid
26251 squid 20   0  662m 576m 5348 S  0.7  7.4   5:45.11 squid
26248 squid 20   0  649m 564m 5348 S  2.3  7.2   9:22.91 squid
26247 squid 20   0  603m 518m 5348 S  1.3  6.6   4:45.47 squid

Adding the sizes in the "virt" column gives me 5621MB.  The combined RSS 
is 4926MB.  Process 26250 has just 2MB in swap, the rest have no swap at 
all.  I guess the difference between the RSS and Virt totals can 
probably be almost entirely accounted for by the executables.


mallinfo() says there is about 4897MB in the arena - close enough to sum 
of the RSS that I'm inclined to believe it.  Since no one process 
exceeds 2GB, mallinfo() is probably trustworthy in this case.
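
For reference, the wrap Amos describes comes from the glibc mallinfo()
interface itself - its counters are declared as plain ints.  A minimal
standalone sketch (not Squid code) that reads the same counters the
"Memory usage for squid via mallinfo()" report is built on:

/* mallinfo_sketch.cc - standalone illustration, not Squid code.
 * glibc's mallinfo() counters are plain 'int', so they silently wrap
 * once a process has allocated more than ~2GB. */
#include <malloc.h>
#include <cstdio>
#include <vector>

int main()
{
    /* Allocate a few thousand small blocks so the arena counters move;
     * blocks this size come from the main arena rather than mmap(). */
    std::vector<void *> blocks;
    for (int i = 0; i < 4096; ++i)
        blocks.push_back(malloc(16 * 1024));

    struct mallinfo mi = mallinfo();
    printf("arena:    %d KB (total arena)\n", mi.arena / 1024);
    printf("uordblks: %d KB (in use)\n", mi.uordblks / 1024);
    printf("fordblks: %d KB (free)\n", mi.fordblks / 1024);
    printf("hblkhd:   %d KB (mmap()ed blocks)\n", mi.hblkhd / 1024);

    for (void *p : blocks)
        free(p);
    return 0;
}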


The accounted for memory is 440MB - potentially higher than I would 
expect for a Squid with caching disabled, but certainly low enough to 
not be an immediate concern.  But that gives us about 4.4GB of memory 
unaccounted for (by subtracting 440MB from the sum of the RSS as shown 
by top).


4.4GB of unaccounted memory after running for just 6 hours is a 
significant amount, and if left to their own devices Squid continues to 
eat all the memory, leaving the machine thrashing swap.


I count 531 sockets in netstat, so my guess is that it isn't just data 
associated with some open sockets:

netstat -apn|egrep '26249|26245|26244|26250|26246|26251|26248|26247' -c
531


--
 - Steve
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] leaking memory in squid 3.4.8 and 3.4.7.

2014-10-01 Thread Steve Hill

On 29.09.14 10:04, Steve Hill wrote:


I _think_ I have narrowed it down to something ICAP related


Looks like I was wrong - it actually seems to be external ACL related.

I have an external ACL defined as:
external_acl_type preauth cache=0 children-max=1 concurrency=100 ttl=0 
negative_ttl=0 %SRC %>{User-Agent} %URI %METHOD /usr/sbin/squid-preauth


The inclusion of %URI means that it's going to be called a lot, even 
with caching, but in this case I've turned caching off.  As far as I can 
see in the code, if cache=0 or (ttl=0 and negative_ttl=0), it doesn't 
touch the cache at all so my guess is that this isn't a problem with the 
caching code.


I'm testing this with Siege and consistently seeing "Total size" 
increasing by about 51MB and "memPool unaccounted" increasing by about 
14MB after 20,000 requests from a fresh start (so, 2.6K per request and 
0.7K/request respectively).  If I disable the external ACL then I see 
growths of about 10MB and 1MB respectively.


Although this isn't especially consistent with the stats from a 
production system that I tested yesterday, which showed about 
5.5K/request (total) and 5K/request (unaccounted) over 926815 requests.


--
 - Steve
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] leaking memory in squid 3.4.8 and 3.4.7.

2014-10-01 Thread Steve Hill

On 01.10.14 13:19, Michele Bergonzoni wrote:

I have an external ACL defined as:
external_acl_type preauth cache=0 children-max=1 concurrency=100 ttl=0
negative_ttl=0 %SRC %>{User-Agent} %URI %METHOD /usr/sbin/squid-preauth


It is well known that external ACLs with ttl=0 and cache=0 leak RAM: I
had this problem and discovered the reason only recently, but Amos knows
it since 2011:


Hmm, I've changed my ttl, negative_ttl and cache to 1 and Squid no longer 
seems to grow under test conditions... Unfortunately it looks like it's 
still growing on the production system.  I guess my tests aren't 
reproducing the same issue - back to the drawing board. :(




--
--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-844-9791439 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-844-4844916 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] leaking memory in squid 3.4.8 and 3.4.7.

2014-10-06 Thread Steve Hill

On 01.10.14 13:54, Amos Jeffries wrote:


I recently opened a bug about this, that I will update now:

http://bugs.squid-cache.org/show_bug.cgi?id=4088


Thank you for the reminder. I will start work on this next.


I'm afraid the patch you added to that bug report doesn't work for me 
(in fact, it makes things worse).  I've updated the ticket with details.



--
 - Steve
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] leaking memory in squid 3.4.8 and 3.4.7.

2014-10-07 Thread Steve Hill

On 30.09.14 16:13, Amos Jeffries wrote:


I'm trying to figure out if there's a way of convincing valgrind to
dump info about all the currently allocated memory while the
program is still running - there would be a lot of legitimate stuff
in the report, but hopefully a few hundred MB of memory that
shouldn't be there would stick out like a sore thumb.


That would be lovely. If you can find it I'd like to know too :-)


su -s /bin/bash squid
valgrind --vgdb=full --error-limit=no --tool=memcheck --leak-check=full 
--show-reachable=yes --leak-resolution=high --num-callers=40 squid -f 
/etc/squid/squid.conf -N


Do whatever you need to do to trigger a leak, then in another window:
su -s /bin/bash squid
gdb squid

At the gdb prompt:
target remote | vgdb
set logging file /tmp/leaks.txt
set logging on
monitor leak_check full reachable any

This will dump out all the currently allocated memory to the console 
(and to /tmp/leaks.txt).


--
 - Steve
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] leaking memory in squid 3.4.8 and 3.4.7.

2014-10-08 Thread Steve Hill

On 08.10.14 15:05, Amos Jeffries wrote:


New patch added to bug 4088. Please see if it resolves the
external_acl_type leak.


Seems to fix the problem - thank you!

--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-825748 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] leaking memory In Squid 3.4.6

2014-10-09 Thread Steve Hill
ttps matches the decrypted requests
# inside it when it is bumped.
acl CONNECT method CONNECT
acl https   proto https

acl proxy_auth  proxy_auth REQUIRED
acl tproxy  myportname tproxy
acl tproxy_ssl  myportname tproxy_ssl

# The "you have been blocked" page comes from the web server on 
localhost and
# needs to be excluded from filtering and being forwarded to the 
upstream proxy.

acl dstdomain_localhost dstdomain localhost


##
# Start of http_access access control.
##

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost

# Unauthenticated access to the local server
http_access allow local_ips

http_access allow !tproxy !tproxy_ssl !https preauth
http_access allow !preauth_done preauth_tproxy
http_access allow need_http_auth need_postauth_sync proxy_auth postauth_sync
http_access allow need_http_auth need_postauth_async proxy_auth 
postauth_async

http_access allow need_http_auth proxy_auth

http_access deny preauth_ok show_login_page

http_access deny all


##
# Other services
##

icp_access deny all
htcp_access deny all


##
# SSL bumping - http://www.squid-cache.org/mail-archive/squid-dev/201206/0089.html

# When the web filter wants a CONNECT request to be bumped it sets the
# icap_says_bump header on it, which we trap for here.  Transparently
# proxied SSL connections are always bumped.
##

acl icap_says_bump req_header X-SSL-Bump -i Yes
ssl_bump server-first icap_says_bump
ssl_bump server-first tproxy_ssl
sslproxy_cert_error allow all


##
# Listening ports
##

http_port 3128 ssl-bump generate-host-certificates=on 
cert=/etc/pki/tls/certs/squid-sslbump.crt 
key=/etc/pki/tls/private/squid-sslbump.key
http_port 8080 ssl-bump generate-host-certificates=on 
cert=/etc/pki/tls/certs/squid-sslbump.crt 
key=/etc/pki/tls/private/squid-sslbump.key

http_port 3130 tproxy name=tproxy
https_port 3131 ssl-bump generate-host-certificates=on 
cert=/etc/pki/tls/certs/squid-sslbump.crt 
key=/etc/pki/tls/private/squid-sslbump.key tproxy name=tproxy_ssl



##
# Set a Netfilter mark on transparently proxied connections so they can have
# special routing
##

tcp_outgoing_mark 0x2 tproxy
tcp_outgoing_mark 0x2 tproxy_ssl


##
# Since we do no caching in this instance of Squid, we use a second instance as
# an upstream caching proxy.  For efficiency reasons we try to send uncachable
# traffic directly to the web server rather than via the upstream proxy.
##

cache_peer [::1] parent 3129 0 proxy-only no-query no-digest no-tproxy 
name=caching

cache_peer_access caching deny CONNECT
cache_peer_access caching deny https
cache_peer_access caching deny tproxy_ssl
cache_peer_access caching deny to_localhost
cache_peer_access caching deny dstdomain_localhost
cache_peer_access caching allow all

cache_mem 0
cache deny all
never_direct deny CONNECT
never_direct deny https
never_direct deny tproxy_ssl
never_direct deny to_localhost
never_direct deny dstdomain_localhost
never_direct allow all


##
# Interface with the web filter
##
icap_enable on
icap_service_revival_delay 30
icap_preview_enable on
icap_preview_size 5
icap_send_client_ip on
icap_send_client_username on

icap_service iceni_reqmod_precache reqmod_precache 0 
icap://localhost6:1344/reqmod_precache
icap_service iceni_respmod_postcache respmod_precache 0 
icap://localhost6:1344/respmod_postcache


adaptation_service_set iceni_reqmod_precache iceni_reqmod_precache
adaptation_service_set iceni_respmod_postcache iceni_respmod_postcache

adaptation_access iceni_reqmod_precache deny local_ips
adaptation_access iceni_reqmod_precache deny to_localhost
adaptation_access iceni_reqmod_precache deny dstdomain_localhost
adaptation_access iceni_reqmod_precache allow all

adaptation_access iceni_respmod_postcache deny local_ips
adaptation_access iceni_respmod_postcache deny to_localhost
adaptation_access iceni_respmod_postcache deny dstdomain_localhost
adaptation_access iceni_respmod_postcache allow all



--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-825748 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] SSL bump , high memory usage

2014-10-10 Thread Steve Hill

I think I've identified the bulk of the memory "leak" I've been tracking
down for the past few days.  As it turns out, it doesn't seem to be a
leak, but a problem with the SSL certificate caching.

The certificate cache is set by dynamic_cert_mem_cache_size and defaults
to 4MB.  Squid assumes an SSL context is 1KB, so we can cache up to 4096
certificates:

/// TODO: Replace on real size.
#define SSL_CTX_SIZE 1024

Unfortunately the assumed size isn't even close - it looks like an SSL
context usually weighs in at about 900KB!  So the default limit means
the cache can actually grow to around 3.6GB.  To make matters worse,
each worker gets its own cache, so in an 8-way SMP configuration, the
default "4MB" cache size limit actually ends up as around 30GB.


Possible fixes:
1. Stick with a static SSL_CTX_SIZE but use a more accurate estimate (I
suggest 1MB / context, based on my observations).
2. Calculate the context sizes correctly.
3. In the config, specify the maximum number of certificates to be
cached, rather than their total size.


In any case, as it stands the defaults are pretty much guaranteed to
cause a server to run out of memory.

How "heavy weight" is SSL context generation anyway?  i.e. if we expect
to generate 20 certificates a minute (which is what I see on a
production system in the first 5 minutes after startup), is that going
to place a significant load on the system, or is that ok?

The 900KB context size that I've observed seems pretty big to me.  Does
it sound reasonable, or is there some extra data being cached along with
the context that could have been freed earlier?


The instrumentation isn't good enough to be able to spot where this
memory usage originates without extensive debugging.  The memory isn't
allocated to memory pools, so mgr:mem doesn't show it, and
mgr:cached_ssl_cert relies on the incorrect estimate, so it appears small
and inconsequential.  The processes can clearly be seen to grow rapidly,
but it all just shows up as unaccounted memory in mgr:info.  Additionally,
mgr:cached_ssl_cert doesn't appear to work with SMP workers.


My Squid process sizes are still larger than I would expect, so I'll
need to do more investigation, but reducing dynamic_cert_mem_cache_size
has stopped the rapid unbounded growth I have been seeing.

-- 

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-825748 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-824568 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] SSL bump fails accessing .gov.uk servers

2014-10-31 Thread Steve Hill

This is probably not a problem with Squid, but I'm posting here in the
hope that someone may have more clue than me when it comes to SSL :)

When accessing https://www.taxdisc.service.gov.uk/ through an SSL
bumping squid, I get:

-
The following error was encountered while trying to retrieve the URL:
https://www.taxdisc.service.gov.uk/*

Failed to establish a secure connection to 62.25.101.198

The system returned:

(71) Protocol error (TLS code: SQUID_ERR_SSL_HANDSHAKE)

Handshake with SSL server failed: [No Error]
-


Trying to connect with openssl directly also fails:

[steve@atlantis ~]$ openssl s_client -connect 62.25.101.198:443 -showcerts
CONNECTED(0003)
140259944179584:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake
failure:s23_lib.c:177:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 249 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
---


If I force openssl into TLS1 mode (with the -tls1 argument) then it
works fine.  TLS 1.1 and 1.2 both fail.  However, shouldn't openssl be
negotiating the highest TLS version supported by both server and client?

It works correctly when Firefox connects directly to the web server
rather than going through the proxy.

So my question is: is the web server broken, or am I misunderstanding
something?

Many thanks.

-- 

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-825748 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-824568 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] RFC2616 headers in bumped requests

2014-11-04 Thread Steve Hill

Squid (correctly) inserts Via and X-Forwarded-For headers into requests
that it is proxying.  However, in the case of encrypted traffic, the
server and client are expecting the traffic to reach the other end
as-is, since usually this could not be intercepted.  With SSL bumped
requests this is no longer true - the proxy can (and does) modify the
traffic, by inserting these headers.

So I'm asking the question: is this behavior considered desirable, or
should we be attempting to modify the request as little as possible for
compatibility reasons?

I've just come across a web server that throws its toys out of the pram
when it sees a Via header in an HTTPS request, and unfortunately it's
quite a big one - Yahoo.  See this request:

-
GET /news/degrees-lead-best-paid-careers-141513989.html HTTP/1.1
Host: uk.finance.yahoo.com
Via: 1.1

HTTP/1.1 301 Moved Permanently
Date: Tue, 04 Nov 2014 09:55:40 GMT
Via: http/1.1 yts212.global.media.ir2.yahoo.com (ApacheTrafficServer [c
s f ]), http/1.1 r04.ycpi.ams.yahoo.net (ApacheTrafficServer [cMsSfW])
Server: ATS
Strict-Transport-Security: max-age=172800
Location:
https://uk.finance.yahoo.com/news/degrees-lead-best-paid-careers-141513989.html
Content-Length: 0
Age: 0
Connection: keep-alive
-

Compare to:

-
GET /news/degrees-lead-best-paid-careers-141513989.html HTTP/1.1
Host: uk.finance.yahoo.com

HTTP/1.1 200 OK
...
-


Note that the 301 that they return when a Via header is present just
points back at the same URI, so the client never gets the object it
requested.

For now I have worked around it with:
  request_header_access Via deny https
  request_header_access X-Forwarded-For deny https
But it does make me wonder if inserting the headers into bumped traffic
is a sensible thing to do.

-- 

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-825748 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-824568 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SSL bump fails accessing .gov.uk servers

2014-11-04 Thread Steve Hill
On 31/10/14 20:03, Dieter Bloms wrote:

> but when the server is broken, it will not work.
> Have a look at:
> 
> https://www.ssllabs.com/ssltest/analyze.html?d=www.taxdisc.service.gov.uk
> 
>> It works correctly when FireFox connects directly to the web server
>> rather than going through the proxy.
> 
> yes the browsers have a workaround and try with different cipher suites,
> when the first connect fails.
> 
>> So my question is: is the web server broken, or am I misunderstanding
>> something?
> 
> The webserver is broken.

Many thanks for this - I have emailed them, which I fully expect them to
ignore  :)

-- 

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-825748 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-824568 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] RFC2616 headers in bumped requests

2014-11-17 Thread Steve Hill
On 04/11/14 13:59, Amos Jeffries wrote:

>> I've just come across a web server that throws its toys out of the
>> pram when it sees a Via header in an HTTPS request, and
>> unfortunately it's quite a big one - Yahoo.  See this request:
> 
>> - GET /news/degrees-lead-best-paid-careers-141513989.html
>> HTTP/1.1 Host: uk.finance.yahoo.com Via: 1.1
> 
> That is unfortunately an invalid HTTP Via header. It is mandatory to
> contain the host field even if it contains a host alias for the real
> FQDN. If that is what is actually being transfered the server is right
> in complaining.

It looks like I copied and pasted this wrong in my original email, I
have just retested and squid sends:
  Via: 1.1 iceni2.opendium.net (squid/3.4.9)

>> For now I have worked around it with: request_header_access Via
>> deny https request_header_access X-Forwarded-For deny https But it
>> does make me wonder if inserting the headers into bumped traffic is
>> a sensible thing to do.
> 
> If you can please chek that Via header being emitted by your Squid
> when things break. And also whether your Squid is contacting their
> server on an HTTPS or HTTP port.
>  If your Squid is contacting their HTTP port for un-encrypted traffic
> this redirect is competely expected.

This is definitely occurring when contacting the server on HTTPS with a
valid Via header:

$ openssl s_client -connect uk.finance.yahoo.com:443 -servername
uk.finance.yahoo.com
CONNECTED(0003)
depth=3 C = US, O = "VeriSign, Inc.", OU = Class 3 Public Primary
Certification Authority
verify return:1
depth=2 C = US, O = "VeriSign, Inc.", OU = VeriSign Trust Network, OU =
"(c) 2006 VeriSign, Inc. - For authorized use only", CN = VeriSign Class
3 Public Primary Certification Authority - G5
verify return:1
depth=1 C = US, O = "VeriSign, Inc.", OU = VeriSign Trust Network, OU =
Terms of use at https://www.verisign.com/rpa (c)10, CN = VeriSign Class
3 Secure Server CA - G3
verify return:1
depth=0 C = US, ST = California, L = Sunnyvale, O = Yahoo Inc., CN =
www.yahoo.com
verify return:1
---
Certificate chain
 0 s:/C=US/ST=California/L=Sunnyvale/O=Yahoo Inc./CN=www.yahoo.com
   i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at
https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
 1 s:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at
https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
   i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006
VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public
Primary Certification Authority - G5
 2 s:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006
VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public
Primary Certification Authority - G5
   i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification
Authority
---
[certificate removed]
---
GET /news/degrees-lead-best-paid-careers-141513989.html HTTP/1.1
Host: uk.finance.yahoo.com
Via: 1.1 iceni2.opendium.net (squid/3.4.9)

HTTP/1.1 301 Moved Permanently
Date: Mon, 17 Nov 2014 10:20:57 GMT
Via: http/1.1 yts272.global.media.ir2.yahoo.com (ApacheTrafficServer [c
s f ]), http/1.1 r15.ycpi.dee.yahoo.net (ApacheTrafficServer [cMsSfW])
Server: ATS
Strict-Transport-Security: max-age=172800
Location:
https://uk.finance.yahoo.com/news/degrees-lead-best-paid-careers-141513989.html
Content-Length: 0
Age: 0
Connection: keep-alive

-- 

 - Steve

-- 

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-825748 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-824568 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Assertion failure: DestinationIp.cc:60

2014-11-18 Thread Steve Hill
I'm seeing a lot of this in both 3.4.6 and 3.4.9:

2014/11/18 15:08:48 kid1| assertion failed: DestinationIp.cc:60:
"checklist->conn() && checklist->conn()->clientConnection != NULL"

I've looked through Bugzilla and couldn't see anything regarding this -
is this a known bug?

-- 

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-825748 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-824568 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] RFC2616 headers in bumped requests

2014-11-20 Thread Steve Hill
On 17/11/14 22:05, Amos Jeffries wrote:

> Would you mind running an experiment for me?
> 
> To see what happens if Squid delivers either of these Via headers
> instead of its current output:
> 
>   Via: HTTPS/1.1 iceni2.opendium.net (squid/3.4.9)

The HTTPS/1.1 one appears to work correctly.

>   Via: TLS/1.2 iceni2.opendium.net (squid/3.4.9)

The web server produces the same broken redirect as before when I send
TLS/1.2.

> Setting it with request_header_access/replace should do.

I've tested this in Squid with request_header_access/replace and
confirmed with openssl's s_client directly.

-- 

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-825748 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-824568 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Assertion failure: DestinationIp.cc:60

2014-11-20 Thread Steve Hill
never_direct deny https
never_direct deny tproxy_ssl
never_direct deny to_localhost
never_direct deny dstdomain_localhost
never_direct allow all

icap_enable on
icap_service_revival_delay 30
icap_preview_enable on
icap_preview_size 5
icap_send_client_ip on
icap_send_client_username on

icap_service iceni_reqmod_precache reqmod_precache 0
icap://localhost6:1344/reqmod_precache
icap_service iceni_respmod_postcache respmod_precache 0
icap://localhost6:1344/respmod_postcache

adaptation_service_set iceni_reqmod_precache iceni_reqmod_precache
adaptation_service_set iceni_respmod_postcache iceni_respmod_postcache

adaptation_access iceni_reqmod_precache deny local_ips
adaptation_access iceni_reqmod_precache deny to_localhost
adaptation_access iceni_reqmod_precache deny dstdomain_localhost
adaptation_access iceni_reqmod_precache allow all

adaptation_access iceni_respmod_postcache deny local_ips
adaptation_access iceni_respmod_postcache deny to_localhost
adaptation_access iceni_respmod_postcache deny dstdomain_localhost
adaptation_access iceni_respmod_postcache allow all

-- 

 - Steve

-- 

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-825748 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-824568 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] RFC2616 headers in bumped requests

2014-12-02 Thread Steve Hill

On 30.11.14 07:40, Amos Jeffries wrote:


Just to followup, there will not be a permanent change made to Squid
because:


That's fair enough - my initial diagnosis was basically "the server's 
broken", but I thought it was worth having a debate about the merits of 
modifying the contents of a bumped connection, in light of the fact that 
at least one such broken server exists and that it probably isn't 
completely unreasonable for the endpoints to assume they have a 
transparent tunnel.


(Another perennial problem that comes from the assumption of a 
transparent tunnel is clients making websockets connections over https, 
which of course won't work through a bumped connection since Squid 
doesn't support HTTP upgrade requests)


Many thanks.

--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-825748 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Debugging slow access

2014-12-10 Thread Steve Hill


I'm looking for advice on figuring out what is causing intermittent high 
CPU usage.


I'm seeing this on multiple servers - most of the time everything is 
fine and I see the Squid workers using maybe 20% CPU each, but every so 
often all the workers sit at the top of the process list in "top", using 
> 97% CPU each and users report very sluggish web access.


Using squidclient during "sluggish" periods is also very slow, with 
Squid taking several seconds to respond to the http requests.  The 
number of requests being handled by squid during the slow periods isn't 
especially high (maybe ~20 / second) and is certainly lower than the 
number of requests at other times - probably because it is taking so 
long to answer requests, but this seems to indicate that it isn't simply 
overloaded and having to deal with too many requests at once.


The during the "slow" periods, squid's servicing of requests seems very 
bursty in nature - I see a whole bunch of requests over a few hundred 
milliseconds and then nothing for maybe half a second.  There are no log 
entries that seem to coincide with these problems.


If I firewall off the clients, the load drops back to zero, so it seems 
this is something a client is doing that is causing Squid to expend a 
huge amount of CPU handling the request, rather than Squid getting stuck 
in a loop or similar.


Restarting squid seems to temporarily fix the problem, but it invariably 
comes back again at some point.


Notably the median service times go up:
HTTP Requests (All):   0.30178  0.40454
Cache Misses:  0.70906  0.65348
Cache Hits:0.0  0.0
Near Hits: 0.0  0.0
Not-Modified Replies:  0.0  0.0
DNS Lookups:   0.02893  0.03092
ICP Queries:   0.0  0.0

UP Time:11657.399 seconds
CPU Time:   8843.268 seconds
CPU Usage:  111.23%
CPU Usage, 5 minute avg:144.81%
CPU Usage, 60 minute avg:   153.58%
Maximum Resident Size: 2937536 KB
Page faults with physical i/o: 3


Compared to (recently restarted):
HTTP Requests (All):   0.09477  0.09477
Cache Misses:  0.11465  0.11465
Cache Hits:0.0  0.0
Near Hits: 0.0  0.0
Not-Modified Replies:  0.0  0.0
DNS Lookups:   0.00953  0.00953
ICP Queries:   0.0  0.0

UP Time:293.336 seconds
CPU Time:   127.775 seconds
CPU Usage:  43.56%
CPU Usage, 5 minute avg:47.40%
CPU Usage, 60 minute avg:   47.40%
Maximum Resident Size: 799808 KB
Page faults with physical i/o: 0


Is there any advice on how to track down what the problem is?

This Squid is doing:
 - No caching
 - ICAP
 - External ACLs
 - Auth (Negotiate and Basic)
 - SSL bump
 - Both TPROXY and non-transparent (majority of the traffic is 
non-transparent)

 - Uses an upstream proxy for most HTTP (not HTTPS)

--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-825748 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Debugging slow access

2015-01-05 Thread Steve Hill

On 10.12.14 17:09, Amos Jeffries wrote:


I'm looking for advice on figuring out what is causing intermittent
high CPU usage.


It appears that the connections gradually gain more and more notes with 
the key "token" (and values containing Kerberos tokens).  I haven't been 
able to reproduce the problem reliably enough to determine if this is 
the root of the high CPU usage problem, but it certainly doesn't look right:


When an ACL is executed that requires the login name (e.g. the 
proxy_auth ACL, or an external ACL using the %LOGIN format specifier), 
Acl.cc:AuthenticateAcl() is called.  This, in turn, calls
UserRequest.cc:tryToAuthenticateAndSetAuthUser(), which calls 
UserRequest.cc:authTryGetUser().  Here we get a call to 
Notes.cc:appendNewOnly() which appends all the notes from 
checklist->auth_user_request->user()->notes.


I can see the appendNewOnly() call sometimes ends up appending a large 
number of "token" notes (I've observed requests with a couple of hundred 
"token" notes attached to them) - the number of notes increases each 
time a Kerberos authentication is performed.  My suspicion is that this 
growth is unbounded and in some cases the number of notes could become 
large enough to be a significant performance hit.
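
To illustrate what I think is happening, here is a simplified standalone
sketch - these are not Squid's real classes, just my reading of how
add(), findFirst() and appendNewOnly() interact:

/* token_notes_sketch.cc - simplified illustration, not Squid code.
 * add() always appends, findFirst() returns the entry nearest the start,
 * and appendNewOnly() copies anything whose (key,value) pair is not
 * already present - so every new Kerberos token accumulates. */
#include <cstdio>
#include <string>
#include <vector>

struct Note { std::string key, value; };

struct NotePairs {
    std::vector<Note> entries;

    void add(const std::string &k, const std::string &v) {
        entries.push_back({k, v});                /* appends, never replaces */
    }
    const std::string *findFirst(const std::string &k) const {
        for (const auto &n : entries)             /* oldest entry wins */
            if (n.key == k) return &n.value;
        return nullptr;
    }
    void appendNewOnly(const NotePairs &src) {
        for (const auto &n : src.entries) {       /* skips exact duplicates only */
            bool dup = false;
            for (const auto &m : entries)
                if (m.key == n.key && m.value == n.value) { dup = true; break; }
            if (!dup) entries.push_back(n);
        }
    }
};

int main()
{
    NotePairs cachedUser;                         /* stands in for the cached user record */
    for (int req = 1; req <= 5; ++req) {
        NotePairs helperReply;                    /* each auth round yields a fresh token */
        helperReply.add("token", "krb-token-" + std::to_string(req));
        cachedUser.appendNewOnly(helperReply);    /* absorb() merges it into the cached user */
        printf("after request %d: %zu token notes, findFirst() -> %s\n",
               req, cachedUser.entries.size(), cachedUser.findFirst("token")->c_str());
    }
    return 0;
}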


A couple of questions spring to mind:

1. HelperReply.cc:parse() calls notes.add("token",authToken.content()) 
(i.e. it adds a token rather than replacing an existing one).  As far as 
I can tell, Squid only ever uses the first "token" note, so maybe we 
should be removing the old notes when we add a new one?


[Actually, on closer inspection, NotePairs::add() appends to the end of 
the list but NotePairs::findFirst() finds the note closest to the start 
of the list.  Unless I'm missing something, this means the newer "token" 
notes are added but never used?]


2. I'm not sure on how the ACL checklists and User objects are shared 
between connections/requests and how they are supposed to persist.  It 
seems to me that there is something wrong with the sharing/persistence 
if we're accumulating so many "token" notes.  As well as the performance 
problems, there could be some race conditions lurking here?


--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-825748 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Debugging slow access

2015-01-05 Thread Steve Hill

On 05.01.15 16:35, Eliezer Croitoru wrote:


Can you share the "squid -v" output and the OS you are using?


Scientific Linux 6.6, see below for the squid -v output.

I've now more or less confirmed that this is the cause of my performance 
problems - every so often I see Squid using all the CPU whilst servicing 
very few requests.  Most of the CPU time is being used by the 
appendNewOnly() function.  For example, 228 milliseconds for 
appendNewOnly() to process a request with 2687 "token" notes attached to 
it, and this can happen more than once per request.



Squid Cache: Version 3.4.10
configure options:  '--build=x86_64-redhat-linux-gnu' 
'--host=x86_64-redhat-linux-gnu' '--target=x86_64-redhat-linux-gnu' 
'--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' 
'--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' 
'--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64' 
'--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib' 
'--mandir=/usr/share/man' '--infodir=/usr/share/info' 
'--exec_prefix=/usr' '--libexecdir=/usr/lib64/squid' 
'--localstatedir=/var' '--datadir=/usr/share/squid' 
'--sysconfdir=/etc/squid' '--with-logdir=$(localstatedir)/log/squid' 
'--with-pidfile=$(localstatedir)/run/squid.pid' 
'--disable-dependency-tracking' '--enable-arp-acl' 
'--enable-follow-x-forwarded-for' '--enable-auth' 
'--enable-auth-basic-helpers=LDAP,MSNT,NCSA,PAM,SMB,YP,getpwnam,multi-domain-NTLM,SASL,DB,POP3,squid_radius_auth' 
'--enable-auth-ntlm-helpers=smb_lm,no_check,fakeauth' 
'--enable-auth-digest-helpers=password,ldap,eDirectory' 
'--enable-auth-negotiate-helpers=squid_kerb_auth' 
'--enable-external-acl-helpers=file_userip,LDAP_group,unix_group,wbinfo_group' 
'--enable-cache-digests' '--enable-cachemgr-hostname=localhost' 
'--enable-delay-pools' '--enable-epoll' '--enable-icap-client' 
'--enable-ident-lookups' '--enable-linux-netfilter' 
'--enable-referer-log' '--enable-removal-policies=heap,lru' 
'--enable-snmp' '--enable-ssl' '--enable-storeio=aufs,diskd,ufs,rock' 
'--enable-useragent-log' '--enable-wccpv2' '--enable-esi' '--with-aio' 
'--with-default-user=squid' '--with-filedescriptors=16384' '--with-dl' 
'--with-openssl' '--with-pthreads' 'build_alias=x86_64-redhat-linux-gnu' 
'host_alias=x86_64-redhat-linux-gnu' 
'target_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-fPIE -Os -g -pipe 
-fsigned-char -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions 
-fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' 
'LDFLAGS=-pie' 'CXXFLAGS=-fPIE -O2 -g -pipe -Wall 
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector 
--param=ssp-buffer-size=4 -m64 -mtune=generic' 
'PKG_CONFIG_PATH=/usr/lib64/pkgconfig:/usr/share/pkgconfig' 
--enable-ltdl-convenience



--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-825748 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Debugging slow access

2015-01-06 Thread Steve Hill

On 05.01.15 18:15, Amos Jeffries wrote:


Can you try making the constructor at the top of src/HelperReply.cc
look like this and see if it resolves the problem?

HelperReply::HelperReply(char *buf, size_t len) :
 result(HelperReply::Unknown),
 notes(),
 whichServer(NULL)
{
 assert(notes.empty());
 parse(buf,len);
}


This didn't help I'm afraid.

Some further debugging so far today:
The notes in HelperReply are indeed empty when the token is added.

However, Auth::Negotiate::UserRequest::HandleReply() appends the reply 
notes to auth_user_request.  It fetches a cached user record from 
proxy_auth_username_cache and then calls absorb() to merge 
auth_user_request with the cached user record.  This ends up adding the 
new Negotiate token into the cached record.  This keeps happening for 
each new request and the cached user record gradually accumulates tokens.


As far as I can see, tokens are only ever read from the helper's reply 
notes, not the user's notes, so maybe the tokens never need to be 
appended to auth_user_request in the first place?


Alternatively, A->absorb(B) could be altered to remove any notes from A 
that have the same keys as B's notes, before using appendNewOnly() to 
merge them?


--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-825748 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Debugging slow access

2015-01-06 Thread Steve Hill

On 05.01.15 20:11, Eliezer Croitoru wrote:


Did you had the chance to take look at bug 3997:
http://bugs.squid-cache.org/show_bug.cgi?id=3997


This could quite likely be the same issue.  See my other post this 
morning for details, but I've pretty much tracked this down to the 
Negotiate tokens being appended to user cache records in an unbounded 
way.  Eventually you end up with so many tokens (several thousand) that 
the majority of the CPU time is spent traversing the tokens.  A quick 
look at the NTLM code suggests that this would behave in the same way.


The question now is what the "correct" way is to fix it - we could 
specifically avoid appending "token" notes in the Negotiate/NTLM code, 
or we could do something more generic in the absorb() method.  (My 
preference is the latter unless anyone can think why it would be a bad 
idea).


--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-825748 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Debugging slow access

2015-01-06 Thread Steve Hill

On 06.01.15 12:15, Steve Hill wrote:


Alternatively, A->absorb(B) could be altered to remove any notes from A
that have the same keys as B's notes, before using appendNewOnly() to
merge them?


I've implemented this for now in the attached patch and am currently 
testing it.  Initial results suggest it resolves the problem.


It introduces a new method, NotePairs::appendAndReplace(), which 
iterates through the source NotePairs and removes any NotePairs in the 
destination that have the same key, then calls append().


This is not the most efficient way of erasing the notes, because Squid's 
Vector template doesn't appear to have an erase() method.
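
For comparison, a std::vector can do the same thing in one pass with the
erase-remove idiom; a minimal sketch in plain standard C++ (not Squid's
Vector, just to show what an equivalent erase() would look like):

/* erase_remove_sketch.cc - plain standard C++ for comparison, not Squid code.
 * Removes every note with a matching key in a single pass, instead of
 * rescanning from the start after each prune(). */
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

struct Note { std::string key, value; };

/* Drop every note whose key matches, then append the replacement. */
static void replaceKey(std::vector<Note> &notes, const Note &replacement)
{
    notes.erase(std::remove_if(notes.begin(), notes.end(),
                               [&](const Note &n) { return n.key == replacement.key; }),
                notes.end());
    notes.push_back(replacement);
}

int main()
{
    std::vector<Note> notes = {{"token", "old-1"}, {"group", "staff"}, {"token", "old-2"}};
    replaceKey(notes, {"token", "new"});
    for (const auto &n : notes)
        printf("%s = %s\n", n.key.c_str(), n.value.c_str());  /* group = staff, token = new */
    return 0;
}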


--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-825748 / sip:supp...@opendium.com
Index: source/src/Notes.cc
===
--- source/src/Notes.cc	(revision 354)
+++ source/src/Notes.cc	(working copy)
@@ -221,6 +221,22 @@
 }
 
 void
+NotePairs::appendAndReplace(const NotePairs *src)
+{
+    for (Vector<NotePairs::Entry *>::const_iterator i = src->entries.begin(); i != src->entries.end(); ++i) {
+        Vector<NotePairs::Entry *>::iterator j = entries.begin();
+        while (j != entries.end()) {
+            if ((*j)->name.cmp((*i)->name.termedBuf()) == 0) {
+                entries.prune(*j);
+                j = entries.begin();
+            } else
+                ++j;
+        }
+    }
+    append(src);
+}
+
+void
 NotePairs::appendNewOnly(const NotePairs *src)
 {
 for (Vector<NotePairs::Entry *>::const_iterator  i = src->entries.begin(); i != src->entries.end(); ++i) {
Index: source/src/Notes.h
===
--- source/src/Notes.h	(revision 354)
+++ source/src/Notes.h	(working copy)
@@ -131,6 +131,12 @@
 void append(const NotePairs *src);
 
 /**
+ * Append the entries of the src NotePairs list to our list, replacing any
+ * entries in the destination set that have the same keys.
+ */
+void appendAndReplace(const NotePairs *src);
+
+/**
  * Append any new entries of the src NotePairs list to our list.
  * Entries which already exist in the destination set are ignored.
  */
Index: source/src/auth/User.cc
===
--- source/src/auth/User.cc	(revision 354)
+++ source/src/auth/User.cc	(working copy)
@@ -101,7 +101,7 @@
 debugs(29, 5, HERE << "auth_user '" << from << "' into auth_user '" << this << "'.");
 
 // combine the helper response annotations. Ensuring no duplicates are copied.
-notes.appendNewOnly(&from->notes);
+notes.appendAndReplace(&from->notes);
 
 /* absorb the list of IP address sources (for max_user_ip controls) */
 AuthUserIP *new_ipdata;
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] ssl_crtd

2015-01-20 Thread Steve Hill

At the moment I'm running Squid 3.4 with bump-server-first using the
internal certificate generation stuff (i.e. not ssl_crtd).  I can't find
a lot of information about using/not using ssl_crtd so I was wondering
if anyone can give me a run-down of the pros and cons of using it
instead of the internal cert generator?

Thanks.

-- 

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-825748 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-824568 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ssl-bump doesn't like valid web server

2015-01-21 Thread Steve Hill

On 21.01.15 08:40, Jason Haar wrote:


I'm running squid-3.4.10 on CentOS-6 and just got hit with ssl-bump
blocking/warning access to a website which I can't figure out why


Probably not very helpful, but it works for me (squid-3.4.10, Scientific 
Linux 6.6, bump-server-first, but not using ssl_crtd).  I also can't see 
anything wrong with the certificate chain.


--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-825748 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ssl-bump doesn't like valid web server

2015-01-21 Thread Steve Hill
On 21/01/15 18:39, Eliezer Croitoru wrote:

>> but not using ssl_crtd
> What are using if not ssl_crtd?

Squid generates the certificates internally if ssl_crtd isn't turned on
at compile time.  I've not seen any information explaining the pros and
cons of each approach (I'd welcome any input!).


-- 

 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-825748 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-824568 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ssl-bump doesn't like valid web server

2015-02-02 Thread Steve Hill

On 22.01.15 08:14, Amos Jeffries wrote:


Squid only *generates* server certificates using that helper. If you
are seeing the log lines "Generating SSL certificate" they are
incorrect when not using the helper.

The non-helper bumping is limited to using the configured http(s)_port
cert= and key= contents. In essence only doing client-first or
peek+splice SSL-bumping styles.


I'm pretty sure this is incorrect - I'm running Squid 3.4 without 
ssl_crtd, configured to bump server-first.  The cert= parameter to the 
http_port line points at a CA certificate.  When visiting an https site 
through the proxy, the certificate sent to the browser is a forged 
version of the server's certificate, signed by the cert= CA.  This 
definitely seems to be server-first bumping - if the server's CA is 
unknown, Squid generates an appropriately broken certificate, etc. as 
you would expect.


Am I missing something?

--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-825748 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ssl-bump doesn't like valid web server

2015-02-04 Thread Steve Hill

On 02.02.15 13:23, Eliezer Croitoru wrote:


On what OS are you running squid? is it self compiled one?


Scientific Linux 6.6.

And yes, it's a self-compiled Squid.

I'm quite happy to change to using the helper if that is the preferred 
method (until recently I was unaware that the helper existed).  Although 
I've got to admit that I was a bit surprised to be told that the way 
I've been successfully using Squid is impossible. :)


--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-825748 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Dual-stack IPv4/IPv6 captive portal

2015-02-27 Thread Steve Hill


I'm wondering whether anyone has implemented a captive portal on a 
dual-stacked network, and whether they can provide any insight into the 
best way of going about it.



The problems:

- Networks are frequently routed with the proxy server on the border. 
This means the proxy doesn't get to see the client's MAC address, so 
captive portals have to work by associating the IP address with the 
user's credentials.


- In a dual-stacked environment, a clients' requests come from both its 
IPv4 address and IPv6 address.  Treating them independently of each 
other would lead to a bad user experience since the user would need to 
authenticate separately for each address.


- Where IPv6 privacy extensions are enabled, the client has multiple 
addresses at the same time, with the preferred address changing at 
regular intervals.  The address rotation interval is typically quite 
long (e.g. 1 day) but the change-over between addresses will occur 
spontaneously with the captive portal not being informed in advance. 
Again, we don't want to auth each address individually.


- Captive portals often want to support WISPr to allow client devices to 
perform automated logins.



Possible solutions:

- The captive portal page could include embedded objects fetched from the 
captive portal server's v4 and v6 addresses.  This would allow the 
captive portal to temporarily link the two addresses together and 
therefore link the authentication credentials to both (a rough sketch of 
this approach follows after this list).  The portal would still have to 
work correctly when used from single-stacked devices.  It also isn't 
going to work for WISPr clients, since the client never renders the page 
during an automated login, so we wouldn't expect any embedded objects to 
be requested.


- Using DHCPv6 instead of SLAAC to do the address assignment would 
disable IPv6 privacy extensions, which would be desirable in this case. 
 However, many devices don't support DHCPv6.


- The DHCP and DHCPv6 servers know the MAC and IPv[46] address of each 
client and could cooperate with each other to link this data together. 
However, the proxy does not always have control of the DHCP/DHCPv6 servers.
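
Fleshing out the first idea a little: the sketch below (Python) shows one 
way the correlation could work.  It's only a rough, untested sketch - the 
portal-v4.example.net / portal-v6.example.net hostnames and the in-memory 
session store are placeholder assumptions, and a real portal would 
obviously persist the results somewhere Squid (or an external ACL helper) 
can get at them.

#!/usr/bin/env python3
"""Sketch: correlate a client's IPv4 and IPv6 addresses via two beacons.

The login page embeds two 1-pixel images, one fetched via a v4-only
hostname and one via a v6-only hostname, both carrying the same session
token.  Whichever addresses request the beacons get tied to that session,
so credentials entered once can be associated with both addresses.
"""
import secrets
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.parse import parse_qs, urlparse

sessions = {}  # session token -> set of client addresses seen so far

PORTAL_PAGE = """<html><body>
<form method="post" action="/login?token={token}">
  <input name="user"> <input name="pass" type="password">
  <input type="submit" value="Log in">
</form>
<img src="http://portal-v4.example.net/beacon?token={token}" width="1" height="1">
<img src="http://portal-v6.example.net/beacon?token={token}" width="1" height="1">
</body></html>"""

class Portal(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        token = parse_qs(url.query).get("token", [None])[0]
        if url.path == "/beacon" and token in sessions:
            # Record whichever address (v4 or v6) fetched this beacon.
            sessions[token].add(self.client_address[0])
            self.reply(b"\n")
        else:
            # Anything else: start a new session and serve the login page.
            token = secrets.token_urlsafe(16)
            sessions[token] = {self.client_address[0]}
            self.reply(PORTAL_PAGE.format(token=token).encode())

    def do_POST(self):
        url = urlparse(self.path)
        token = parse_qs(url.query).get("token", [None])[0]
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        addrs = sessions.pop(token, set())
        # A real portal would check the credentials here and then mark
        # *all* of the collected addresses as authenticated for this user.
        self.reply(("Authenticated addresses: %s\n" % sorted(addrs)).encode())

    def reply(self, body):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    ThreadingHTTPServer(("", 8000), Portal).serve_forever()

The key point is just that both beacons carry the same token, so whichever 
v4 and v6 addresses end up fetching them get tied to the same login.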



--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-825748 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Dual-stack IPv4/IPv6 captive portal

2015-02-27 Thread Steve Hill

On 27.02.15 17:00, Michele Bergonzoni wrote:


This is true for v6 if the client uses its MAC as an identifier,
which it's not supposed to do and last time I checked was not true
for Windows, or if clients or DHCP relays support RFC6939, which is
quite new. See for example:

https://lists.isc.org/pipermail/kea-dev/2014-June/43.html


Oh, interesting - I hadn't realised that.


Have you thought about engineering your captive portal with a dual
stack DNS name (having both A and AAAA), a v4 only and a v6 only, and
having you HTML embed requests with appropriate identifiers to
correlate addresses? Of course there are HTTP complications and it is
not perfect, but I guess that as long as it's a captive portal,
kludginess cannot decrease below some level.


That was one of my options.  However, it won't work in the case of WISPr 
auto-logons because the page wouldn't be rendered by the client, so you 
wouldn't expect it to fetch embedded bits either.



I am really interested to hear what people are doing in the field of
squid-powered captive portals, even more when interoperating with
iptables/ip6tables.


At the moment, we've written a hybrid captive portal / HTTP-auth system. 
Essentially, we use HTTP proxy auth where we can and a captive portal 
where we can't.  HTTP proxy auth is preferable because every request 
gets authenticated individually and we can use Kerberos.  Unfortunately 
a lot of software doesn't support it properly (I'm looking at you, Apple 
and Google, although everyone else is getting pretty bad at it too), and 
it also can't be used for transparent proxying (again, a lot of software 
just doesn't bother to support proxies these days, and it's only getting 
worse).  So we use the User-Agent string to try to identify the clients 
we can safely authenticate, and the rest rely on cached credentials or 
the captive portal.
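
For what it's worth, the User-Agent check itself is nothing clever - 
roughly along the lines of the sketch below.  The patterns are purely 
illustrative (they are not our real list, which is longer and changes 
whenever a vendor breaks something new):

import re

# Illustrative patterns only: UAs we trust to handle 407 proxy auth get
# proxy authentication; everything else falls back to the captive portal.
PROXY_AUTH_OK = [
    re.compile(r"Firefox/\d+"),
    re.compile(r"Windows NT .* Chrome/\d+"),
]
PORTAL_ONLY = [
    re.compile(r"\b(iPhone|iPad|Android|CaptiveNetworkSupport)\b"),
]

def auth_method(user_agent):
    """Decide how to authenticate a request based on its User-Agent."""
    if any(p.search(user_agent) for p in PORTAL_ONLY):
        return "portal"
    if any(p.search(user_agent) for p in PROXY_AUTH_OK):
        return "proxy-auth"
    # Unknown software: assume the worst and fall back to cached
    # credentials or the captive portal.
    return "portal"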


Yes, it's a horrible bodge, but unfortunately that's where modern 
software is driving us. :(  For iOS and Android you can pretty much 
forget using pure HTTP proxy authentication.  Luckily iOS can use WISPr 
to automatically log into a portal, sadly vanilla Android still doesn't 
include a WISPr client (I'd put money on this being down to patents!).



--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-825748 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Dual-stack IPv4/IPv6 captive portal

2015-03-03 Thread Steve Hill

On 02.03.15 02:33, Amos Jeffries wrote:


  These people are plain wrong about how the basic protocol works and yet
they are treated with must-accept policies by so many networks.


Yep, one of the really big problems we have is the "it works when we're 
not using the proxy, so the proxy must be broken" attitude, when almost 
universally the proxy is working fine and the other software is just 
plain broken.  It's really hard to convince a customer that it really 
isn't our fault when some app breaks, especially when that app is made 
by someone like Apple or Google (who, of course, can *never* be wrong!)


The vast majority of our support time is spent figuring out ways to work 
around busted end-user software, because we know saying "Apple's 
software is broken, go and talk to Apple" isn't going to work because 
the likes of Apple have no interest in actually supporting their own 
customers and somehow this ends up being "our fault".  (Not just Apple - 
lots of other companies are equally bad, although Apple have currently 
hit a nerve with me due to a lot of debugging I recently had to do with 
their appstore because they didn't bother to log any errors when things 
broke, which also seems to be par for the course these days).



  Imagine what would happen if you MUST-accept all emails delivered? or
any kind of DNS response they chose to send you? those are two other
major protocols with proxies that work just fine by rejecting bad
messages wholesale.


Well, you say that, but we also get "it works at home but not at work" 
complaints when DNS servers start returning broken data.  Admittedly we 
usually seem to be able to not catch quite so much blame for that one, 
although I'm not sure how. :)


Basically, in my experience, if it works in situation A and not in 
situation B people will assume that the problem is whatever is different 
in situation B rather than that both situations are completely valid but 
their application is broken and can't handle one of them.  This becomes 
a big problem when situation A is the more prevalent one - at that point 
you either start working around the buggy software, or you lose a 
customer and get a reputation for selling "broken" stuff.


So whilst I agree with you that in an ideal world we wouldn't work 
around stuff (we would just report bugs and the broken software would be 
fixed), in the real world the big mainstream businesses aren't interested 
in supporting their customers, and yet somehow the rest of us end up 
having to do it for them or it reflects badly on *us*. 



FWIW, I am always happy to work with other people/companies to help them 
fix their broken stuff.  This has been met with a mix of responses - 
sometimes they are happy to work with me to fix things, which is great, 
but sadly not the most common experience.  Often I send a detailed bug 
report, explaining what's going wrong, referencing standards, etc. and 
get a "you're wrong, we're right, we're not going to change anything" 
response, which would be fine if they referenced anything to back up 
their position, but they never do.  Many simply ignore the reports 
altogether.

Then we have people like Microsoft, who I've tried to contact on several 
occasions to report bugs in their public-facing web servers - there are 
no suitable contact details ever published and I've been bounced from 
department to department with no one quite sure what to do with someone 
reporting problems with their _public_ servers and not having some kind 
of support contract with them (I've got no resolution to any of the 
problems I reported to them because I've never actually managed to get 
my report to anyone responsible).

I've given up reporting bugs to Apple because they always demand that I 
spend a lot of my time collecting debug logs, but then they sit on the 
report and never actually fix it (again, I've never had a resolution to 
a bug I've reported to Apple, despite supplying them with extensive 
debugging).



/rant :)

--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-825748 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] i hope to build web Authentication portal at Tproxy environment recenty , can you give me some advisement .

2015-03-11 Thread Steve Hill

On 11.03.15 10:22, johnzeng wrote:


Does PHP or jQuery need to send the user's IP address to Squid?  Otherwise
I'm worried about whether Squid can confirm the user info.

And how do I identify and control the HTTP traffic?


I'd do this with an external ACL - when processing a request, Squid 
would call the external ACL which would do:


1. If the user is not authenticated or their "last seen" timestamp has 
expired, return "ERR"
2. If the user is authenticated, update their "last seen" timestamp and 
return OK.


Obviously if the ACL returns ERR, Squid needs to redirect the user to 
the authentication page.  If the ACL returns OK, Squid needs to service 
the request as normal.


The authentication page would update the database which the external ACL 
refers to.
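
Something along these lines, as a minimal sketch (Python; it assumes the 
external_acl_type is configured to pass just %SRC with no concurrency 
channel, and it keeps its "database" in memory purely for illustration - 
the real helper would share state with the authentication page via a 
file, SQL, Redis or similar):

#!/usr/bin/env python3
"""Sketch of an external ACL helper implementing the logic above."""
import sys
import time

SESSION_TIMEOUT = 15 * 60  # seconds of inactivity before re-authentication
last_seen = {}             # client address -> last activity timestamp
# The authentication page would populate this when a user logs in, e.g.:
last_seen["192.0.2.10"] = time.time()

def main():
    for line in sys.stdin:
        addr = line.strip()          # with %SRC this is the client address
        now = time.time()
        seen = last_seen.get(addr)
        if seen is not None and now - seen < SESSION_TIMEOUT:
            last_seen[addr] = now    # refresh the "last seen" timestamp
            sys.stdout.write("OK\n")
        else:
            last_seen.pop(addr, None)  # expired (or never authenticated)
            sys.stdout.write("ERR\n")
        sys.stdout.flush()           # Squid waits for each reply

if __name__ == "__main__":
    main()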


Identifying the user's traffic would need to be done by MAC address or IP:
 - MAC address requires a flat network with no routers between the 
device and Squid.

 - IP has (probably) unfixable problems in a dual-stacked network.

Beware that:
1. Access to the authentication page must be allowed for unauthenticated 
users (obviously :)
2. Authentication should really be done over HTTPS with a trusted 
certificate.
3. Clients require access to some external servers to validate HTTPS 
certs before they have authenticated.

4. If you want to support WISPr then (2) and (3) are mandatory.
5. External ACL caching: Squid caches the helper's verdicts (see the 
ttl= and negative_ttl= options on external_acl_type), so logins and 
logouts won't take effect instantly unless those TTLs are kept short.

You might be able to do it with internal ACLs, but... pain :)

--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-825748 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Assert(call->dialer.handler == callback)

2015-04-30 Thread Steve Hill
sion opcode 0xf3

) at client_side_request.cc:1935
#35 0x7ffe14abbcaa in JobDialer::dial 
(this=0x7ffe1ce04990, call=...) at ../../src/base/AsyncJobCalls.h:174
#36 0x7ffe149bea69 in AsyncCall::make (this=0x7ffe1ce04960) at 
AsyncCall.cc:40
#37 0x7ffe149c272f in AsyncCallQueue::fireNext (this=Unhandled dwarf 
expression opcode 0xf3

) at AsyncCallQueue.cc:56
#38 0x7ffe149c2a60 in AsyncCallQueue::fire (this=0x7ffe16f70bf0) at 
AsyncCallQueue.cc:42
#39 0x7ffe1484110c in EventLoop::runOnce (this=0x7fffcb8c4be0) at 
EventLoop.cc:120
#40 0x7ffe148412c8 in EventLoop::run (this=0x7fffcb8c4be0) at 
EventLoop.cc:82
#41 0x7ffe148ae191 in SquidMain (argc=Unhandled dwarf expression 
opcode 0xf3

) at main.cc:1511
#42 0x7ffe148af2e9 in SquidMainSafe (argc=Unhandled dwarf expression 
opcode 0xf3

) at main.cc:1243
#43 main (argc=Unhandled dwarf expression opcode 0xf3
) at main.cc:1236

(sorry about the DWARF errors - it looks like I've got a version 
mismatch between gcc and gdb)


--
 - Steve Hill
   Technical Director
   Opendium Limited http://www.opendium.com

Direct contacts:
   Instant messager: xmpp:st...@opendium.com
   Email:st...@opendium.com
   Phone:sip:st...@opendium.com

Sales / enquiries contacts:
   Email:sa...@opendium.com
   Phone:+44-1792-824568 / sip:sa...@opendium.com

Support contacts:
   Email:supp...@opendium.com
   Phone:+44-1792-825748 / sip:supp...@opendium.com
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users