>
> Hi all,
>
>
> I'm setting up a system for using iPads in our school, and I'm a bit stuck on
> tracking what the students are doing on them.
>
>
> First up, I really don't want a pop-up login box from a 407 response from a
> proxy server, so I'm looking for some other way to track who is doing what.
>
> We run Squid 3.5.6 on a proxy server running FreeBSD 9.3.
> Squid is the only way out; there is no transparency at all.
> We have problems with windows update through squid.
>
Problems just getting Windows Update through Squid at all, or problems trying
to get Squid to actually cache the updates?
At hom
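If it's the latter (getting Squid to actually cache the updates), the usual starting point is the recipe from the wiki's WindowsUpdate page - roughly the below, quoted from memory, so check the wiki for the authoritative version:
# let squid fetch whole objects even though the update client asks for ranges
acl windowsupdate dstdomain .windowsupdate.com .download.windowsupdate.com
range_offset_limit none windowsupdate
maximum_object_size 6 GB
refresh_pattern -i windowsupdate.com/.*\.(cab|exe|msi|msu|psf)$ 4320 80% 43200 reload-into-ims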
>
> Without "www.*" --> Forbidden: You don't have permission to access / on this
> server.
>
Some browsers (Chrome?) will "help" by prepending www for you... if you type
just "squid-cache.org" it will turn it into http://www.squid-cache.org and you
won't see the problem. Maybe
>
> Is it possible to redirect all ports to squid through iptables?
> For example port 25 (SMTP), 143 (IMAP), etc...
> Can squid handle that in transparent mode?
Yes. Kind of. You need:
* An appropriate rule in the iptables nat table that ends with -j REDIRECT
--to-ports 3129 (or whatever port you are listening on) - see the sketch below.
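A minimal sketch of the iptables side for plain HTTP (assuming the LAN-facing interface is eth0 and Squid has an "http_port 3129 intercept" configured - both are assumptions to adapt):
# send LAN web traffic to Squid's intercept port
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3129
Note this only makes sense for HTTP (and, with ssl-bump, HTTPS); redirecting SMTP or IMAP at Squid won't achieve anything, since Squid only speaks HTTP.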
>
> Does it make sense to keep trying to do HTTPS interception with the arrival of
> pinning and all those things that prevent this kind of activity?
>
I think if you give it some time there will be commercial pressure to allow
override of pinning.
I mean, you are only ever going to do SSL in
> No, adding Basic is not an option because I will have to provide
> special "proxy passwords" to the users, or make them enter their
> Windows passwords by hand. This is highly undesirable. Once they
> log on to Windows, they must have (or not have) Web access
> transparently.
>
> If you know ho
Doing a search on the main squid page gives me this:
The requested URL /cgi-bin/swish-query.cgi was not found on this server.
Maybe better to do a Google search anyway?
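On the actual question quoted above (no Basic, and Web access must follow the Windows logon transparently): that is normally what Negotiate (Kerberos/NTLM) authentication is for - the browser answers the 407 silently with the user's Windows credentials, so nobody types a password. A rough sketch, where the helper path and the service principal are assumptions you would need to adapt:
# Negotiate auth: the browser answers the 407 using the Windows logon ticket
auth_param negotiate program /usr/lib/squid/negotiate_kerberos_auth -s HTTP/proxy.example.com@EXAMPLE.COM
auth_param negotiate children 20 startup=0 idle=1
auth_param negotiate keep_alive on
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
http_access deny all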
James
Just reading up on this, the Feature page
http://wiki.squid-cache.org/Features/SslPeekAndSplice says:
"... with Squid shoveling TCP bytes back and forth without any decryption"
I can't see that squid actually uses the splice() system call, so that would
mean squid would actually read the data into userspace and write it back out,
rather than splicing it in the kernel.
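For reference, the configuration shape that feature page is describing is the peek/splice ladder, roughly like this (a sketch only - the ACL name and domain are made up):
# peek at the TLS client hello, then just shovel bytes for these sites
acl step1 at_step SslBump1
acl shovel ssl::server_name .example.com
ssl_bump peek step1
ssl_bump splice shovel
ssl_bump bump all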
>
> On 17/10/2014 9:47 p.m., James Harper wrote:
> > Just reading up on this, the Feature page
> > http://wiki.squid-cache.org/Features/SslPeekAndSplice says:
> >
> > "... with Squid shoveling TCP byt
>
> It looks like this question has come up before, but I'm hoping to get some
> further details on it.
>
> I've used a couple of firewalls (Watchguard & Fortigate) that allow me to do a
> level of HTTPS site filtering without decryption. I believe that it works by
> requesting and examining the
I've written a little helper to do SSL callouts to determine if the server is
actually speaking SSL at all (eg that something else isn't being tunnelled over
the SSL port), and also to do limited ACL matching on CN/SAN. The main
limitation is the way larger organisations will often have one SSL cert that
covers many URLs (eg google cert a
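For anyone wanting to do something similar, a helper like that gets wired in as an external ACL, along these lines (helper name, path and format tokens here are placeholders, not necessarily what mine uses):
# helper looks up the server certificate for DST:PORT and answers OK/ERR
external_acl_type ssl_cert_check ttl=3600 children-max=5 %DST %PORT /usr/local/bin/ssl_cert_check
acl cert_ok external ssl_cert_check
acl CONNECT method CONNECT
http_access deny CONNECT !cert_ok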
>
> On 11/24/2014 02:43 PM, Kinkie wrote:
> > Hi Eliezer, I don't think so. PAC files have no access to the DOM or
> > facilities like AJAX, and are very limited in what they can return
> > or affect as side-effects. In theory it could be possible to do
> > something, but in practice it would be on
I've been getting squid crashes with squid-3.5.0.2-20141031-r13657. Basically I
think my cache got corrupted - I started seeing TCP_SWAPFAIL_MISS and md5
mismatches.
Config is: cache_dir ufs /usr/local/squid/var/cache/squid 102400 16 256
It's possible that at one point I might have started 2 instances of squid
running at once... could that cause corruption?
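(For the record, "wiping the cache directory" in the follow-up below means the usual dance, roughly:)
# stop squid, throw away the corrupt cache, recreate the swap dirs, restart
squid -k shutdown
rm -rf /usr/local/squid/var/cache/squid/*
squid -z
squid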
This has happened again a day or so after wiping the cache directory. Core dump
this time:
#0 StoreEntry::checkCachable (this=this@entry=0x284c440) at store.cc:962
962 getReply()->content_length > store_maxobjsize) ||
(gdb) bt
#0 StoreEntry::checkCachable (this=this@entr
> > It's possible that at one point I might have started 2 instances of
> > squid running at once... could that cause corruption?
>
> Yes, very likely. More so the longer they were both running.
>
> I see you mention segfaults below; those can also cause corruption for any
> objects in use at the time of
>
> On IE, the error is "the proxy server is not responding"
> On Chrome: "ERR_SSL_PROTOCOL_ERROR"
> On Firefox "ssl_error_rx_record_too_long"
>
> If I bypass the proxy and go direct to the internet through our firewall, it
> works fine.
>
> This suggests to me, without having any errors in squi
I have a rewrite rule so that any requests for a list of apt repositories (acl
dstdomain) are rewritten to instead go to my apt-cacher server, and then a
"cache deny" rule to make sure squid doesn't cache files from these
repositories. This seemed to be working fine but my latest attempt at a deb
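The relevant bits are shaped roughly like this (the domain list and helper path are illustrative rather than my exact config):
# send apt repository requests through the rewrite helper, and don't cache them
acl apt_repos dstdomain .debian.org .ubuntu.com
cache deny apt_repos
url_rewrite_program /usr/local/bin/apt_rewrite
url_rewrite_access allow apt_repos
url_rewrite_access deny all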
The following "works" for me:
# intercept for transparent proxy of ssl connections
https_port 3130 name=transproxyssl intercept ssl-bump
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
cert=/usr/local/squid/etc/ca.pem
# just testing with my laptop
acl james_src arp 11:11:11:11:11:
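To get the traffic onto that port in the first place the firewall has to divert 443 to it; with iptables that is something like (the interface name is an assumption):
# divert LAN port-443 traffic into the ssl-bump intercept port
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-ports 3130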
Suppose hostname hostsite.com resolves to 2 different IP addresses. If a client
in transparent mode retrieves a page (which is then cached by squid), and
another client in transparent mode retrieves the same page but from the other
IP address, is the page served from the cache?
If not, is there
>
> Probably a non-HTTPS protocol being used.
>
> As bumping gets more popular we are hearing about a number of services
> abusing port 443 for non-HTTPS protocols on the false assumption that
> the TLS layer goes all the way to the origin server without
> inspection. That has never been a true assumption.
>
> On 01/01/15 00:11, James Harper wrote:
> > The helper connects to the IP:port and tries to obtain the certificate, and
> then caches the result (in an SQLite database). If it can't do so within a
> fairly
> short time it returns failure (but keeps trying a bit longe
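(For anyone following along, that is roughly the moral equivalent of doing this by hand with openssl - the host and IP below are just placeholders:)
# fetch the server certificate and show its subject and subjectAltName entries
openssl s_client -connect 203.0.113.1:443 -servername example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -text | grep -A1 'Subject Alternative Name'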
I have just noticed that urlpath_regex isn't doing what I want:
acl wuau_repo dstdomain .download.windowsupdate.com
acl wuau_path urlpath_regex -i \.psf$
acl dst_server dstdomain server
acl apt_cacher browser apt-cacher
cache deny dst_server
cache deny apt_cacher
cache deny wuau_repo
cache allow
> Three things:
>
> * by re-writing you are generating an entirely new request with the
> apt-cacher server URL as the destination. The HTTP message details about
> what was originally requested and from where are *gone* when the traffic
> leaves for the server. The solution for that is outlined at
>
> I also tried the same thing with http_access and that works as expected -
> *.psf files are allowed, non-*.psf files are denied. I'm thinking bug at this
> point... I'll do some more testing and see if I can narrow it down.
>
Found it. Really stupid mistake. The documentation shows [-i] for case
insensitivity, but I hadn't picked up that the [] around the -i indicated that
it was optional.
> >
> > Found it. Really stupid mistake. The documentation shows [-i] for
> > case insensitivity, but I hadn't picked up that the [] around the -i
> > indicated that it was optional. I had just cut and pasted from
> > examples. So the .cab thing was irrelevant - it just happened that
> > the .cab f
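(In other words, the square brackets in the documentation just mark the flag as optional and are not meant to be typed literally; the working line is simply:)
acl wuau_path urlpath_regex -i \.psf$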
> Hey,
>
> I have written up a basic idea for a PHP "login portal", which can be seen at:
> http://wiki.squid-cache.org/EliezerCroitoru/SessionHelper/
> http://wiki.squid-cache.org/EliezerCroitoru/SessionHelper/Conf
> http://wiki.squid-cache.org/EliezerCroitoru/SessionHelper/PhpLoginExample
> http://w