On 13.03.15 at 2:37, Mukul Gandhi wrote:
> On Thu, Mar 12, 2015 at 11:04 AM, Yuri Voinov wrote:
>
> You only have an external helper (which you must write yourself) in 3.4.x.
>
>
>> Are there any examples that I can look at to implement this ex
Hello,
I am trying to set up a Captive Portal with Squid (v3.5.2) in intercept
mode and SquidGuard (v1.5) as the URL rewriter. The captive portal works off
usernames in a database, but Squid + SquidGuard work based off IPs.
The most progress I have had just says that authentication by Squid cannot be
d
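For reference, the wiring being described usually looks something like this
in squid.conf (ports and paths here are assumptions, not taken from the
poster's setup):

  http_port 3128
  http_port 3129 intercept
  url_rewrite_program /usr/bin/squidGuard -c /etc/squidguard/squidGuard.conf
  url_rewrite_children 8

Note that proxy authentication cannot be used on intercepted connections,
which is why identification ends up falling back to client IP addresses.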
Do you get any more details when you start the wrapper with -d?
Markus
"Donny Vibianto" wrote in message
news:CAC49LV6SRXbiFcGxqZgAoaHPj1qeifERtSN63ZrDsa_b=iw...@mail.gmail.com...
anyone please...?
On Sat, Mar 7, 2015 at 10:02 PM, Donny Vibianto wrote:
Hi Guys,
After two weeks succe
On Thu, Mar 12, 2015 at 11:04 AM, Yuri Voinov wrote:
>
> You only have an external helper (which you must write yourself) in 3.4.x.
>
>
Are there any examples that I can look at to implement this external
helper for doing selective ssl_bump? And wh
dooh. Solved. iptables forwarded https traffic to 3129 - not 3130.. :)
Thank you for your kind assistance
[CUT]
--
Regards,
Klavs Klavsen, GSEC - k...@vsen.dk - http://www.vsen.dk - Tlf. 61281200
"Those who do not understand Unix are condemned to reinvent it, poorly."
--Henry Spencer
Hey Hack,
I was talking about a radius server like free radius.
Which, by the way, dmasoftlab uses in their products.
Eliezer
On 12/03/2015 07:14, HackXBack wrote:
are you talking about a radius server like free radius?
or like dmasoftlab.com?
On 01/23/2013 10:39 pm, Amos Jeffries wrote:
On 24/01/2013 4:13 a.m., dweimer wrote:
On 2013-01-23 08:40, dweimer wrote:
On 2013-01-22 23:30, Amos Jeffries wrote:
On 23/01/2013 5:34 a.m., dweimer wrote:
I just upgraded my reverse proxy server last night from 3.1.20 to
3.2.6, all is working we
You only have an external helper (which you must write yourself) in 3.4.x.
Working with domains in ssl_bump is fully available only from 3.5.x onwards.
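For the archives, in 3.5.x it can be done without a custom helper, roughly
like this (a sketch only; the domain list is just an example):

  acl step1 at_step SslBump1
  acl bump_domains ssl::server_name .example.com
  ssl_bump peek step1
  ssl_bump bump bump_domains
  ssl_bump splice all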
On 12.03.15 at 21:01, Mukul Gandhi wrote:
> I am running squid 3.4.8 and am looking for solutions to ssl_bump
> for specifi
I am running squid 3.4.8 and am looking for solutions to ssl_bump for
specific domains only. Going through the archives it is clear that it is
not possible unless the reverse DNS points back to the domain that is to be
ssl bumped.
So then what is the solution to this problem? I just want to create
I think I found it..
Trying to run ssl_crtd myself to issue a cert, it says:
Error while parsing the crtd request: Broken signing certificate!
Shouldn't that end up in squid logs as well?
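For context, this is roughly how ssl_crtd is normally initialised and wired
up (paths below are assumptions):

  # initialise the certificate database once
  /usr/lib/squid/ssl_crtd -c -s /var/lib/squid/ssl_db

  # squid.conf
  sslcrtd_program /usr/lib/squid/ssl_crtd -s /var/lib/squid/ssl_db -M 4MB
  sslcrtd_children 8

The "Broken signing certificate" message refers to the signing certificate
squid passes to the helper, so it may point at the cert=/key= pair on the
bumping port rather than at the helper itself.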
Klavs Klavsen wrote on 03/12/2015 03:48 PM:
I just found the config, stating that ssl-bump is only support
I just found the config, stating that ssl-bump is only supported in
intercept mode.. that invalidates accel :)
I set up a client on the same LAN as squid, and told it to use the squid box
as the default gw for traffic to public addresses..
Intercept on port 80 works fine.
On https however I get an SSL con
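For what it's worth, the NAT rules this kind of setup implies on the squid
box look something like the following (interface name and cert path are
assumptions; the ports match the ones mentioned later in this thread):

  iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80  -j REDIRECT --to-ports 3129
  iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-ports 3130

  # matching squid.conf ports
  http_port 3128
  http_port 3129 intercept
  https_port 3130 intercept ssl-bump cert=/etc/squid/bump_ca.pem

Sending port 443 to the plain http intercept port instead of the ssl-bump
https_port is the kind of mixup that can produce exactly that SSL error.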
Amos Jeffries wrote on 03/12/2015 02:27 PM:
On 13/03/2015 1:52 a.m., Klavs Klavsen wrote:
I'd rather not have to route everything (incl. normal incoming web
traffic) through the squid box.. and the firewalls are proprietary stuff
- so can't install squid there :)
You don't, port 80 TCP is all t
On 13/03/2015 1:52 a.m., Klavs Klavsen wrote:
> I'd rather not have to route everything (incl. normal incoming web
> traffic) through the squid box.. and the firewalls are proprietary stuff
> - so can't install squid there :)
You don't, port 80 TCP is all that *needs* it, and only for the traffic
f
On 10/03/2015 10:39 p.m., HackXBack wrote:
> this is my configure option, what may cause the problem
>
> ./configure --prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin
> --libexecdir=/usr/lib/squid --sysconfdir=/etc/squid --localstatedir=/var
> --libdir=/usr/lib --includedir=/usr/include --datad
I'd rather not have to route everything (incl. normal incoming web
traffic) through the squid box.. and the firewalls are proprietary stuff
- so can't install squid there :)
It works fine in accel mode.. and I can limit what urls each client ip
is able to access, and disable caching..
Shouldn
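The per-client restriction being described is just ordinary ACLs, something
along these lines (names and addresses are made up for illustration):

  acl client_a src 10.0.0.11
  acl client_a_urls url_regex -i ^https?://app\.example\.com/
  http_access allow client_a client_a_urls
  http_access deny all
  cache deny all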
How to rebuild without symbol stripping?
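One way, assuming a from-source build: compile with debug symbols and do not
use the stripping install target (configure options other than the CFLAGS are
whatever you used before):

  ./configure CFLAGS='-g -O2' CXXFLAGS='-g -O2' [your other options]
  make
  make install    # not "make install-strip", which strips the symbols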
On 13/03/2015 12:27 a.m., Klavs Klavsen wrote:
> Klavs Klavsen wrote on 03/12/2015 12:15 PM:
>>
>> the routing example didn't seem to work :(
>>
> As I understand it.. I can't use DNAT on the client machine to get packets
> to the squid box.. and since it's locally generated packets (i.e. I want to
> captu
On Thursday 12 March 2015 at 12:46:36 (EU time), James Harper wrote:
> > Ah. That is a bug then. The -i bit is not supposed to be treated as a
> > pattern.
>
> Even when I put it in []'s? I think the mistake was mine.
There was no [] in your original posting of your conf file...
On Thursday 12
> >
> > Found it. Really stupid mistake. The documentation shows [-i] for
> > case insensitivity, but I hadn't picked up that the [] around the -i
> > indicated that it was optional. I had just cut and pasted from
> > examples. So the .cab thing was irrelevant - it just happened that
> > the .cab f
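In config terms the difference comes down to this (reusing the .psf line from
this thread):

  # wrong: the "[-i]" copied from the documentation is read as part of the regex
  acl wuau_path urlpath_regex [-i] \.psf$
  # right: -i is the optional flag that makes the match case-insensitive
  acl wuau_path urlpath_regex -i \.psf$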
On 13/03/2015 12:30 a.m., James Harper wrote:
>>
>> I also tried the same thing with http_access and that works as expected -
>> *.psf files are allowed, non-*.psf files are denied. I'm thinking bug at this
>> point... I'll do some more testing and see if I can narrow it down.
>>
>
> Found it. Reall
> Maybe if you are allocating large cache_mem (I think page-pool.shm was
> shared cache_mem). On 32-bit the 2(4?) GB RAM limit needs to cover
> everything including virtual memory IIRC.
>
> Amos
Yes, you are right, it's good now!
>
> I also tried the same thing with http_access and that works as expected -
> *.psf files are allowed, non-*.psf files are denied. I'm thinking bug at this
> point... I'll do some more testing and see if I can narrow it down.
>
Found it. Really stupid mistake. The documentation shows [-i] for ca
Klavs Klavsen wrote on 03/12/2015 12:15 PM:
the routing example didn't seem to work :(
As I understand it.. I can't use DNAT on the client machine to get packets
to the squid box.. and since it's locally generated packets (i.e. I want to
capture on the clients - instead of capturing on their default
On 12/03/2015 10:41 p.m., FredB wrote:
>
>>
>> missing whitespace separator between those options.
>>
>> '--enable-icap-client' '--enable-follow-x-forwarded-for'
>> '--enable-basic-auth-helpers=LDAP,digest'
>> '--enable-digest-auth-helpers=ldap,password'
>>
>> Syntax: --enable-auth-TYPE=HELPER,LIST
Amos Jeffries wrote on 03/12/2015 11:59 AM:
If your intention here is to get around a broken firewall device's port
80/443 inspection then I expect you require two proxies anyway. The
traffic has to be on a different port entirely which is not being
mangled by the firewall.
I've gotten an OK - fo
On 12/03/2015 10:28 p.m., HackXBack wrote:
> H, Thanks Amos,
> Then what does [squid -F] do then?
It forces Squid not to answer client requests during that
step #4 operation. It's a possibility to avoid manually directing traffic
at the proxy when step #4 is completed. Especially if
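In other words (assuming squid is started by hand; an init script may need
the flag added to its options):

  squid -F    # "don't serve any requests until store is rebuilt"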
If your intention here is to get around a broken firewall device's port
80/443 inspection then I expect you require two proxies anyway. The
traffic has to be on a different port entirely which is not being
mangled by the firewall.
One before and one after the firewall, with packets flowing over the
> Three things:
>
> * by re-writing you are generating an entirely new request with the
> apt-cacher server URL as the destination. The HTTP message details about
> what was originally requested and from where are *gone* when the traffic
> leaves for the server. The solution for that is outlined at
On 12/03/2015 9:14 p.m., James Harper wrote:
> I have just noticed that urlpath_regex isn't doing what I want:
>
> acl wuau_repo dstdomain .download.windowsupdate.com
> acl wuau_path urlpath_regex -i \.psf$
> acl dst_server dstdomain server
> acl apt_cacher browser apt-cacher
>
> cache deny dst_s
>
> missing whitespace separator between those options.
>
> '--enable-icap-client' '--enable-follow-x-forwarded-for'
> '--enable-basic-auth-helpers=LDAP,digest'
> '--enable-digest-auth-helpers=ldap,password'
>
> Syntax: --enable-auth-TYPE=HELPER,LIST
>
> If you don't use that syntax to explicitl
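i.e. presumably something along these lines was intended (a guess based on
the options quoted above):

  ./configure --enable-icap-client --enable-follow-x-forwarded-for \
      --enable-auth-basic=LDAP --enable-auth-digest=LDAP

with each option separated by whitespace and the helper list attached to the
matching --enable-auth-TYPE option.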
H, Thanks Amos,
Then what does [squid -F] do then?
As I understand intercept - that will only work (as you said) when NAT
is performed on the box that is to intercept (when I remove haproxy -
that means the squid box itself).
and I'm going to move the squid box to the same network as the
webservers - to be able to do it the routing way.
It s
Hello Dear Eliezer,
Although we are running two different processes for WCCP redirection and
cache operation, the problem is still there. I guess WCCP needs to check
the health of rock and aufs;
maybe rock is incompatible with WCCP.
(
problem:
I face a probl
I have just noticed that urlpath_regex isn't doing what I want:
acl wuau_repo dstdomain .download.windowsupdate.com
acl wuau_path urlpath_regex -i \.psf$
acl dst_server dstdomain server
acl apt_cacher browser apt-cacher
cache deny dst_server
cache deny apt_cacher
cache deny wuau_repo
cache allow
Hi Amos,
Thank you for the walkthrough..
Instead of having to play with tproxy on haproxy currently, I figured
I'd try a simpler route..
The purpose of this setup is to "jump around" a firewall issue with a
sh#! firewall, which in order to filter http and https traffic
apparently drops 5-
2015-03-12 2:09 GMT+01:00 Amos Jeffries :
> On 12/03/2015 10:26 a.m., Grzegorz Falkowski wrote:
> > Hello,
> > I plan to use sclamav with c-icap to secure a web app from malware threats.
> > I prepared the whole configuration and it works fine. Unfortunately, in the first
> > stage of implementation it should
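For what it's worth, the squid.conf side of a c-icap + clamav setup is
usually along these lines (service name, port and URL path are assumptions):

  icap_enable on
  icap_service clamav_req reqmod_precache icap://127.0.0.1:1344/squidclamav
  icap_service clamav_resp respmod_precache icap://127.0.0.1:1344/squidclamav
  adaptation_access clamav_req allow all
  adaptation_access clamav_resp allow all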
hmm..
I'm trying to follow this on a test client (haven't gotten it working yet):
http://wiki.squid-cache.org/ConfigExamples/Intercept/IptablesPolicyRoute
(where squid is amongst the internal clients - actually on its own VLAN
- but it's not the default route)
but this won't work:
ip route a
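For the record, the approach on that wiki page boils down to marking the web
traffic and policy-routing it to the squid box, roughly like this (addresses,
marks and table numbers are placeholders):

  # on the box doing the redirection
  iptables -t mangle -A PREROUTING -p tcp --dport 80 -j MARK --set-mark 1
  ip rule add fwmark 1 table 100
  ip route add default via 192.168.1.2 table 100   # 192.168.1.2 = the squid box

  # on the squid box itself, push the arriving traffic into the intercept port
  iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 3129

For locally generated traffic (the case described above) the MARK rule has to
go in the mangle OUTPUT chain rather than PREROUTING.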