On 6/10/2016 11:27 a.m., KR wrote:
> Hello Amos,
>
>
>> On Oct 5, 2016, at 9:07 AM, Amos Jeffries wrote:
>>
>> On 5/10/2016 6:48 a.m., KR wrote:
>>> I uncommented that line and now I get
>>>
>>> Initializing the Squid cache with the command squid3 -f
>>> /etc/squid/squid.conf -z ..
>>>
>>
>> Hm
- Original Message -
> From: Marc
> Mimicking in openssl (well... not perfect, but it does the job I guess):
> openssl s_client -quiet -connect www.google.com:443 -tls1 -cipher
> RC4-MD5:RC4-SHA:DES-CBC3-SHA:DES-CBC-SHA:EXP1024-RC4-SHA:EXP1024-DES-CBC-SHA:EXP-
> RC4-MD5:EXP-RC2-CBC-MD5:DHE
On 6/10/2016 11:56 a.m., Jose Torres-Berrocal wrote:
> Correcting typo:
>
> And placing it inside a whitelist.acl file:
> acl whitelist2 dstdom_regex -i "whitelist.acl"
>
> Where whitelist.acl content:
> ^familymedicinepr\.com$
> ^mail\.yahoo\.com$
> ^neodecksoftware\.com$
> ^office\.net$
> \.fam
On 10/05/2016 05:49 PM, squid-us...@filter.luko.org wrote:
>> See "early return"
>> statements in clientReplyContext::processReplyAccess(), including:
>>
>>> /** Don't block our own responses or HTTP status messages */
>>> if (http->logType.oldType == LOG_TCP_DENIED ||
>>> http-
Alex,
> However, there is a difference between my August tests and this thread.
> My tests were for a request parsing error response. Access denials do not
> reach the same http_reply_access checks! See "early return"
> statements in clientReplyContext::processReplyAccess(), including:
>
> >
Let's try again:
acl whitelist1 dstdomain .familymedicinepr.com .mail.yahoo.com
.neodecksoftware.com .office.net
=
acl whitelist2 dstdom_regex ^familymedicinepr\.com$ ^mail\.yahoo\.com$
^neodecksoftware\.com$ ^office\.net$ \.familymedicinepr\.com$
\.mail\.yahoo\.com$ \.neodecksoftware\.com$ \.offic
Correcting typo:
And placing it inside a whitelist.acl file:
acl whitelist2 dstdom_regex -i "whitelist.acl"
Where whitelist.acl content:
^familymedicinepr\.com$
^mail\.yahoo\.com$
^neodecksoftware\.com$
^office\.net$
\.familymedicinepr\.com$
\.mail\.yahoo\.com$
\.neodecksoftware\.com$
\.office\.n
Well.. it looks like the issue I'm having (subject: handshake problems
with stare and bump).
IE8 on XP sends out:
Secure Sockets Layer
SSL Record Layer: Handshake Protocol: Client Hello
Content Type: Handshake (22)
Version: TLS 1.0 (0x0301)
Length: 104
Handshak
On 10/05/2016 02:59 PM, Jose Torres-Berrocal wrote:
> Please confirm equivalence:
>
> 1.
> acl whitelist1 dstdomain .familymedicinepr.com .mail.yahoo.com
> .neodecksoftware.com .office.net
> =
> acl whitelist2 dstdom_regex ^familymedicinepr\.com$ ^mail\.yahoo\.com$
> ^neodecksoftware\.com$ ^office
Please confirm equivalence:
1.
acl whitelist1 dstdomain .familymedicinepr.com .mail.yahoo.com
.neodecksoftware.com .office.net
=
acl whitelist2 dstdom_regex ^familymedicinepr\.com$ ^mail\.yahoo\.com$
^neodecksoftware\.com$ ^office\.net$
OR
2.
acl whitelist1 dstdomain .familymedicinepr.com .mail.
On 10/05/2016 01:15 PM, Jose Torres-Berrocal wrote:
> I would like to know how
> I should enter the domains so as to make it work correctly using
> dstdom_regex behaving like dstdomain
To map any leaf FQDN "foo.bar.baz":
1. start with "^";
2. add "foo.bar.baz" where every period is escaped with a backslash ("\.");
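As a rough sketch of where that recipe ends up, using one of the domains from this thread (the ACL names below are mine, purely for illustration):

acl bare_domain dstdom_regex -i ^familymedicinepr\.com$
# covering subdomains too, the way dstdomain ".familymedicinepr.com" does,
# needs a second pattern without the leading anchor:
acl with_subdomains dstdom_regex -i ^familymedicinepr\.com$ \.familymedicinepr\.com$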
Hey Anthony,
I have used apt-cacher-ng, but it can't save git repos or npm repos. It used
to work great until 12.02, but when we started to have a mixed setup [Ubuntu
13, 14.04 and others] we got issues within our setup, and at one point the
issues became such a daily occurrence that we deci
The situation is that I am using Squid on the pfSense firewall. Squid
is available as a package with a GUI interface. The whitelist is part
of the sections provided by the GUI, and entering the domains as the
list I provided works for most of the domains but fails for others. Th
On Wednesday 05 October 2016 at 20:40:46, Hardik Dangar wrote:
> Hey Jok,
>
> Thanks for the suggestion, but the big issue with that is I have to download
> the whole repository, about 80-120 GB, first, and then each week I need to
> download 20 to 25 GB.
This is not true for apt-cacher-ng. You inst
>> Should "intercept" work with IPv6 on NetBSD 7-STABLE and IPFilter 5.1?
Okay, we have "fixed" Squid interception, and IPFilter in the kernel,
and now it's working good. But did we do it in the right way?
While reading ip_nat.c in IPFilter, I found that SIOCGNATL - and its
function called ipf_na
Hey Jok,
Thanks for the suggestion, but the big issue with that is I have to download
the whole repository, about 80-120 GB, first, and then each week I need to
download 20 to 25 GB. We hardly use any of that except a few popular repos.
The big issue I always have with most of them is third-party repos.
sq
On 6/10/2016 5:31 a.m., Nilesh Gavali wrote:
> <> here is the complete squid.conf for your reference -
>
> #
> # Recommended minimum configuration:
> AD SSO Integration #
> #auth_param negotiate program /usr/lib64/squid/squid_kerb_auth -d -s
> GSS_C_NO_NAME
> auth_param negotiate program
d only
cache requests for the particular type of file given in refresh_pattern.
What do you think would be easier? And how do I work on the squid source to do
the above? Any hint is appreciated.
One more thing: can you tell me, if we are already violating HTTP via options
like nocache, ignore-no-store, ignore-private, ignore-reload, why can't we do
the same for the Vary header?
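For what it's worth, a configuration-only sketch (my own assumption of what "only cache particular file types" could look like, not something proposed in this thread) that avoids touching the Squid source by pairing an urlpath_regex ACL with the cache directive:

# only allow caching of responses whose URL path ends in one of these extensions
acl archive_files urlpath_regex -i \.(deb|zip|tar|rpm)$
cache allow archive_files
cache deny all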
Hi,
Thanks for the replies. I've figured out more details. First, my
assumption that sslproxy_cipher was ignored in my setup was incorrect.
I confused it with what I've read about sslproxy_options on
http://bazaar.launchpad.net/~yadi/squid/warnings/revision/13928 .
Thanks Yuri for making me come t
This is sort of off-topic, but have you considered using a deb repo
mirroring software?
(it would mean that you need to update your clients to point to that rather
than google, but that's not really difficult).
software like aptly (aptly.info) is really good at this (though a
little hard to get
> /var/squid/acl/whitelist.acl:
[snip]
>
> .assertus.com
> .neodecksoftware.com
your whitelist for this domain says that it has "something" followed
by that domain name...
>
>
> .office.net
1. Each domain is on a separate line; why is the next line considered part
of the same pattern?
in the end,
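A small illustration of the escaping point (example lines are mine): in a dstdom_regex pattern an unescaped dot matches any single character, so only the escaped, anchored forms pin the match to the intended domain:

.assertus.com      (unescaped: any character, "assertus", any character, "com", anywhere in the host name)
^assertus\.com$    (escaped and anchored: only the bare domain)
\.assertus\.com$   (escaped and anchored: only subdomains of assertus.com)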
On 10/05/2016 08:33 AM, Hardik Dangar wrote:
> One more thing: can you tell me, if we are already violating HTTP via
> options like nocache, ignore-no-store, ignore-private, ignore-reload, why
> can't we do the same for the Vary header?
We can, but ignoring Vary requires more/different work than adding
First things first: restart the client machine to verify that the
certificate is installed.
If you want a list of "banned" SSL sites you will need to do some research on
your clients' needs...
Nobody can do your work for you without knowing your "thing".
The overall slowdown is from both sites
Hey Amos,
Oh, I actually built archive-mode Squid by getting help here,
http://bugs.squid-cache.org/show_bug.cgi?id=4604
I was thinking: what if we had an option vary_mode, just like archive mode,
to set it for a particular domain, like
acl dlsslgoogle srcdomain dl-ssl.google.com
vary_mode allow dlsslgoo
On 10/05/2016 06:28 AM, ama...@tin.it wrote:
> I'm using squid-3.5.21-20160908-r14081 and for the first time I
> have a squid-smp configuration (4 workers and cache_dir rock).
> 2016/10/05 14:12:55 kid4| Bug: Missing MemObject::storeId value
> Is it a misconfiguration ?
It is a known bug: http:/
On 6/10/2016 12:09 a.m., john jacob wrote:
> Hi All,
>
> We have a requirement to use the same Squid instance for Basic and NTLM
> authentication to serve various customer groups (may not be on different
> network sections). The customer groups which are using Basic authentication
> (for legacy re
On 5/10/2016 11:27 p.m., Hardik Dangar wrote:
> Hey Amos,
>
> I have implemented your patch at
>
> and added following to my squid.conf
> archive_mode allow all
>
> and my refresh pattern is,
> refresh_pattern dl-ssl.google.com/.*\.(deb|zip|tar|rpm) 129600 100% 129600
> ignore-reload ignore-no-s
On 5/10/2016 7:00 a.m., Nilesh Gavali wrote:
> Hi Amos;
> OK, we can discuss the issue in two parts: 1. Windows AD
> Authentication & SSO, and 2. the Linux server unable to access via the Squid proxy.
>
> For First point-
> Requirement to have SSO for accessing internet via squid proxy and based
> o
On 5/10/2016 6:48 a.m., KR wrote:
> I uncommented that line and now I get
>
> Initializing the Squid cache with the command squid3 -f /etc/squid/squid.conf
> -z ..
>
Hmm. The 'squid3' package should have config files at /etc/squid3/*
The 'squid' package has config files at /etc/squid/*
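So, assuming the 'squid3' package is the one installed here, the earlier initialization command would presumably need to point at that package's own config (same flags, only the path changed):

squid3 -f /etc/squid3/squid.conf -z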
> FAT
Hello
I'm using squid-3.5.21-20160908-r14081 and for the first time I
have a squid-smp configuration (4 workers and cache_dir rock).
I don't have a
cache_peer.
In cache.log often I found (always on different kid id):
2016/10/05 14:12:55 kid4| Bug: Missing MemObject::storeId value
2016/10/05 14:12:55
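For reference, a minimal squid.conf sketch of the kind of SMP setup being described (the worker count is from the post; the cache_dir path and size are placeholders of mine):

workers 4
cache_dir rock /var/cache/squid/rock 1024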
Hi All,
We have a requirement to use the same Squid instance for Basic and NTLM
authentication to serve various customer groups (may not be on different
network sections). The customer groups which are using Basic authentication
(for legacy reasons) should not receive NTLM scheme and the customer
Hey Amos,
I have implemented your patch at
and added following to my squid.conf
archive_mode allow all
and my refresh pattern is,
refresh_pattern dl-ssl.google.com/.*\.(deb|zip|tar|rpm) 129600 100% 129600
ignore-reload ignore-no-store override-expire override-lastmod ignor$
but I am still not a