Not much reaction to my statements so far.
So let me ask: is anyone actually receiving a valid 404 HTTP response from
Squid 3.5.x when trying to download a file which does not exist on the
FTP server?
I don't, I think it's a bug in 3.5.x, and I wonder why I seem to be the
only one facing it.
On 1/07/2015 8:52 a.m., Randal Cowen wrote:
> For years I've been successfully running a Squid. Last Wednesday the 17th,
> magically, only HTTPS requests started failing, and only over AT&T's
> cellular network.
>
> Everything still works great on any other land-line provider I've tested,
> including AT&T's DSL service
On 1/07/2015 5:08 a.m., Alex Wu wrote:
> /*
> You could assign two workers, each with a different http_port and
> ssl_crtd helper using different cert databases.
>
> */
>
> How do we do this? It sounds like it might meet our need.
>
at the top of squid.conf place:
workers 2
if ${process_number} =
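Presumably the truncated snippet continues along these lines. A minimal
sketch only, assuming an SSL-Bump-enabled build; the ports, certificate
files, and database paths are hypothetical examples:

workers 2

if ${process_number} = 1
# worker 1: internal port, certs signed by the self-generated CA
http_port 3129 ssl-bump generate-host-certificates=on cert=/etc/squid/internal-ca.pem
sslcrtd_program /usr/lib/squid/ssl_crtd -s /var/lib/squid/ssl_db1 -M 4MB
endif

if ${process_number} = 2
# worker 2: external port, certs signed by the purchased CA
http_port 3130 ssl-bump generate-host-certificates=on cert=/etc/squid/public-ca.pem
sslcrtd_program /usr/lib/squid/ssl_crtd -s /var/lib/squid/ssl_db2 -M 4MB
endif

Each certificate database has to be initialised once (ssl_crtd -c -s <path>)
before the workers start.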
On 1/07/2015 6:21 a.m., Chris Greene wrote:
> I’ve had Squid running on Ubuntu for a few weeks. I’d configured the
> proxy settings in the browsers. Everything has been working well and
> I've been pleased with the results. But now I need to make this a
> transparent proxy and I’m running into trouble
I suggest reading this:
https://support.google.com/websearch/answer/186669
and look at option 3 of section 'Keep SafeSearch turned on for your network'
Marcus
On 06/30/2015 05:48 PM, Mike wrote:
Scratch that (my previous email to this list): Google disabled their
insecure sites when used as
Hello Mike,
Maybe it is time to take a look at ICAP/eCAP protocol implementations,
which target specifically this problem: filtering within the *contents* of
the stream on Squid?
Best regards,
Rafael
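For reference, a minimal sketch of wiring Squid to an ICAP service; the
service name, host, port, and URL path below are placeholders for whichever
ICAP filter ends up being deployed:

icap_enable on
# RESPMOD inspects response bodies, which is where content filtering happens
icap_service content_filter respmod_precache icap://127.0.0.1:1344/respmod
adaptation_access content_filter allow all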
For years I've been successfully running a Squid. Last Wednesday the 17th,
magically, only HTTPS requests started failing, and only over AT&T's
cellular network.
Everything still works great on any other land-line provider I've tested,
including AT&T's DSL service. Typically my logs show:
1435691713.787 240084 T
Scratch that (my previous email to this list): Google disabled their
insecure sites when used as part of a redirect. We as individual users
can use that URL directly in the browser
(http://www.google.com/webhp?nord=1), but any Google page load starts
with a secure page, causing that redirect to f
Thanks for the quick reply. I managed to fix it by removing my old
rpmbuild directory and starting again, and of course making sure that
gcc-c++ was installed (which it wasn't!)
Cheers
Alex
On 30/06/15 20:34, Eliezer Croitoru wrote:
If you look at the configure options I used in the
On 2015-06-30 12:21 PM, Chris Greene wrote:
I’ve had Squid running on Ubuntu for a few weeks. I’d configured the
proxy settings in the browsers. Everything has been working well and
I've been pleased with the results. But now I need to make this a
transparent proxy and I’m running into trouble
If you look at the configure options I used in the RPMs, you will see
that I changed/removed a helper or two from the build.
I haven't had time to inspect the issue yet.
How do you rebuild from the SRPM? (important)
Eliezer
On 30/06/2015 21:48, Alex Crow wrote:
Thanks for this Eliezer
Thanks for this Eliezer - however I can't rebuild the SRPM on latest CentOS:
configure: Authentication support enabled: yes
checking for ldap.h... (cached) no
checking winldap.h usability... no
checking winldap.h presence... no
checking for winldap.h... no
configure: error: Basic auth helper LDAP
I’ve had Squid running on Ubuntu for a few weeks. I’d configured the proxy
settings in the browsers. Everything has been working well and I've been
pleased with the results. But now I need to make this a transparent proxy
and I’m running into trouble & need some help.
I’ve got a Destination
/*
You could assign two workers, each with a different http_port and
ssl_crtd helper using different cert databases.
*/
How do we do this? It sounds like it might meet our need.
The reason is that we assign one port for internal use, so we can use a
cheap CA (a self-generated CA); for the collaboration, we use
On Mon, Jun 29, 2015 at 9:35 PM, Amos Jeffries wrote:
> On 30/06/2015 8:54 a.m., Nick Rogers wrote:
> > Hello,
> >
> > I am experiencing an issue with squid 3.5.5 and FreeBSD 10.1 where
> > tcp_outgoing_address correctly rewrites the source address of outgoing
> > packets, but fails to bind the s
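(For context, tcp_outgoing_address is normally used along these lines; the
ACL and addresses below are hypothetical:)

# send traffic from one client subnet out of a specific local address
acl lan_a src 192.168.1.0/24
tcp_outgoing_address 203.0.113.10 lan_a
tcp_outgoing_address 203.0.113.11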
On 1/07/2015 1:07 a.m., HackXBack wrote:
> Most apps on mobiles use pinned connections.
> How can we automatically bypass any pinned connection that comes to Squid
> and none_bump it?
> Is there a way to make that happen automatically?
I assume you mean the Squid definition of pinning (and not the Chrom
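Whatever the full answer here was, one common workaround is to keep a list
of destinations known to be pinned and splice rather than bump them. A
minimal sketch, assuming 3.5's peek-and-splice; the server names are
hypothetical examples:

acl step1 at_step SslBump1
# destinations whose apps pin certificates (hypothetical examples)
acl pinned_dst ssl::server_name .examplebank.com .examplepay.com
ssl_bump peek step1
ssl_bump splice pinned_dst
ssl_bump bump all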
Most apps on mobiles use pinned connections.
How can we automatically bypass any pinned connection that comes to Squid
and none_bump it?
Is there a way to make that happen automatically?
Thanks.
Hi Amos
On Fri, Jun 19, 2015 at 12:06 PM, Amos Jeffries wrote:
> On 19/06/2015 5:23 a.m., Tom Tom wrote:
>> Hi
>>
>> gdb shows the following:
>>
>> #3 0x7ff7ad7d31d2 in __GI___assert_fail (assertion=0x83314d "0",
>> file=0x8114cb "hash.cc", line=240,
>> function=0x842020 "void
>> h
On 30/06/2015 9:42 p.m., Stakres wrote:
> Amos,
> Yes, a similar case here, on bug 4223.
> Reading bug 4223, we can see your comment that "Non-cacheable objects
> should never be added to the digest."
> In my Squid there is no restriction, ICP is fully open, and the Squid
> servers (3.5.5) are compi
Hi,
Could the issue be related to this?
TCP_MEM_HIT/200 224442 GET
http://squid1/8b26b519d740afd8ec698b6af06efd8e17c6e5b6:8182/squid-internal-periodic/store_digest
- HIER_NONE/- application/cache-digest
Is it normal to see the store digest as a MEM_HIT?
I tell Squid not to reply with a
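If the digest exchange itself is suspected, one way to rule it out is to
disable digests on both peers and retest. A sketch, reusing the cache_peer
line quoted later in the thread:

# stop generating our own digest
digest_generation off
# and stop fetching the peer's digest
cache_peer 10.1.1.2 sibling 8182 8183 proxy-only no-tproxy no-digest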
On 30/06/2015 9:30 p.m., masterx81 wrote:
> Hi...
> I'm trying to limit download bandwidth for some user groups based on AD,
> using external helpers, with the following configuration:
> delay_pools 1
> delay_class 1 1
> delay_access 1 allow InternetLimitato InternetLibero InternetCentralino
> !CONNECT
>
Amos,
Yes, a similar case here, on bug 4223.
Reading bug 4223, we can see your comment that "Non-cacheable objects
should never be added to the digest."
In my Squid there is no restriction, ICP is fully open, and the Squid
servers (3.5.5) are compiled with the digest option, so all is done to allow
Hi...
I'm trying to limit download bandwidth for some user groups based on AD,
using external helpers, with the following configuration:
delay_pools 1
delay_class 1 1
delay_access 1 allow InternetLimitato InternetLibero InternetCentralino
!CONNECT
delay_parameters 1 50/50
"InternetLimitato Intern
On 30/06/2015 8:55 p.m., Stakres wrote:
> Amos,
> We used this example from the wiki:
> http://wiki.squid-cache.org/Features/CacheHierarchy
> We can see a sibling/sibling architecture is possible, right?
Possible vs Useful. For you at present it's possible, but not
particularly useful.
I've just received
On 30/06/2015 8:33 p.m., Stakres wrote:
> Hi,
> I disabled the sibling on both Squid servers; we got one 504:
> TCP_MISS/504 361 GET
> http://rate.msfsob.com/review?h=www.searchhomeremedy.com -
> HIER_DIRECT/8.25.35.129
> A wget on this URL gives a 404, so here we can say the object does not
> exis
Amos,
We used this example from the wiki:
http://wiki.squid-cache.org/Features/CacheHierarchy
We can see a sibling/sibling architecture is possible, right?
Here we cannot have a "cache_peer parent" architecture, as the TPROXY
(original user IP) would be lost at the parent level; you wrote this in a
previous post
On 30/06/2015 8:21 p.m., Stakres wrote:
> Anthony, Amos,
>
> The 2 Squids are kid/parent to each other (both siblings).
I'm being pedantic about this because the operations possible with a
parent proxy are very different from those done with a sibling proxy.
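In squid.conf terms the difference is a single keyword, but the behaviour
is not: a parent may be asked to fetch misses on the kid's behalf, while a
sibling should only be asked for objects it already holds. An illustration,
reusing the ports from the config quoted further down:

# peer may fetch misses for us
cache_peer 10.1.1.2 parent 8182 8183 proxy-only
# peer is only queried for objects it already holds
cache_peer 10.1.1.2 sibling 8182 8183 proxy-only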
> So, when one asks the second, they play
Hi,
I disabled the sibling on both Squid servers; we got one 504:
TCP_MISS/504 361 GET
http://rate.msfsob.com/review?h=www.searchhomeremedy.com -
HIER_DIRECT/8.25.35.129
A wget on this URL gives a 404, so here we can say the object does not
exist; the TCP_MISS/504 seems a correct answer.
But no new
On 30/06/2015 7:57 p.m., HackXBack wrote:
> I copied from the normal log up to the assertion error.
> Is this enough or do you need more?
It's leading me to a very horrible conclusion. This appears to be a
pinned connection with two transactions trying to read data out of the
server connection simultaneously.
Anthony, Amos,
The 2 Squids are kid/parent to each other (both siblings).
So, when one asks the second, they play the role of kid -> parent, am I
right?
Here is the way:
Squid1 checks Squid2 and gets this:
... user-ip TCP_MISS/504 708 GET
http://code.jquery.com/ui/1.10.3/jquery-ui.js - CD_SIBLIN
On 30/06/2015 7:45 p.m., Stakres wrote:
> Hi Antony,
>
> Correct, the kid contacts the parent, which gets a 504 and replies the
> same to the kid. That's why I suspect the parent tries to download by
> itself instead of replying to the kid that it does not have the object,
> so the kid should do a fres
On Tuesday 30 Jun 2015 at 08:45, Stakres wrote:
> There are 2 Squids, siblings of each other.
> Squid1 (10.1.1.1):
> cache_peer 10.1.1.2 sibling 8182 8183 proxy-only no-tproxy
> Squid2 (10.1.1.2):
> cache_peer 10.1.1.1 sibling 8182 8183 proxy-only no-tproxy
> If you need more details, feel free to a
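One conventional safeguard for this topology: deny misses to the peer, so
neither sibling can be made to fetch on the other's behalf. A sketch, with
the IPs taken from the config quoted above:

# on Squid1 (10.1.1.1): refuse to fetch misses for Squid2
acl peer_sibling src 10.1.1.2
miss_access deny peer_sibling
# mirror the same two lines on Squid2, with src 10.1.1.1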
I copied from the normal log up to the assertion error.
Is this enough or do you need more?
Thanks Amos.
2015/06/30 10:09:38.432 kid1| Acl.cc(138) matches: checking always_direct
2015/06/30 10:09:38.432 kid1| Acl.cc(138) matches: checking always_direct#1
2015/06/30 10:09:38.432 kid1| Acl.cc(138) matches: checking fakespeed
2015/06/30 10:09:38.432 kid1| RegexData.cc(51) match: aclRegexData::match:
chec
Hi Antony,
Correct, the kid contacts the parent, which gets a 504 and replies the same
to the kid. That's why I suspect the parent tries to download by itself
instead of replying to the kid that it does not have the object, so the kid
should do a fresh download from the internet.
Examples:
TCP_MISS/504 70
On Tuesday 30 Jun 2015 at 08:24, Stakres wrote:
> Here it seems the parent (sibling mode) tries to do the request itself
> but faces an error (504 gateway timeout); it should answer the kid that it
> does not have the object (TCP_MISS), and then the parent should download
> the object from the internet.
Su
Hi Amos,
Yep, I did not modify the transaction TTL.
Here it seems the parent (sibling mode) tries to do the request itself but
faces an error (504 gateway timeout); it should answer the kid that it does
not have the object (TCP_MISS), and then the parent should download the
object from the internet.
With thi