[squid-users] Query about login=pass

2016-04-01 Thread Sreenath BH
Hi All,

We have a setup with two Squid servers, let's say squid1 and squid2.
Requests land at squid1, which forwards them to squid2. squid2
uses the X-User-ID and Authorization headers to authenticate the user
and, on success, fetches data from another webserver and returns the
data. If authentication fails, it returns a 401 response.

What we have observed is that, for some reason, squid1 does not send
the Authorization header to the upstream squid server. The X-User-ID
header, however, is always sent upstream.

10.135.81.100 is squid2.

Here is the configuration of squid1, where we see the problem.
--
acl test_upload   urlpath_regex   ^/upload
acl test_nms      urlpath_regex   ^/nms
acl trash_misc    urlpath_regex   ^/trash

http_port 80 accel defaultsite=sitgateway.qiodrive.com vhost
https_port 443 cert=/etc/squid3/certificates/test.crt
key=/etc/squid3/certificates/qiodrivekey.key
cafile=/etc/squid3/certificates/gd_bundle-g2-g1.crt accel

cache_peer 10.135.81.100 parent 80 0 no-query login=PASS originserver name=name1
cache_peer_access name1 allow test_upload
cache_peer_access name1 deny all

cache_peer 10.135.81.100 parent 80 0 no-query login=PASS originserver name=name2
cache_peer_access name2 allow test_nms
cache_peer_access name2 deny all

cache_peer 10.135.81.100 parent 80 0 no-query originserver name=name3
cache_peer_access name3 allow trash_misc
cache_peer_access name3 deny all
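
For comparison, a single peer entry with combined access rules might
also work; a minimal sketch (untested). Note it is not equivalent to
the above: the original name3 entry has no login=PASS, while this
variant would apply it to /trash requests as well:

--
cache_peer 10.135.81.100 parent 80 0 no-query login=PASS originserver name=squid2
cache_peer_access squid2 allow test_upload
cache_peer_access squid2 allow test_nms
cache_peer_access squid2 allow trash_misc
cache_peer_access squid2 deny all
--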


As can be seen above, we have associated different names (name1,
name2 and name3) with the same squid2 server, all pointing at the same
port. Is this a correct way of doing it? I ran "squid -k parse" on
the squid.conf file and it did not report any problem.

1. squid1 listens on both port 80 and the SSL port. If a request comes
in on the SSL port, will squid1 still send the Authorization header to
squid2, which is not an SSL squid?

2. In the source code of squid (http.cc) I see the following lines in this function:

void
copyOneHeaderFromClientsideRequestToUpstreamRequest(const HttpHeaderEntry *e,
    const String strConnection, const HttpRequest *request,
    HttpHeader *hdr_out, const int we_do_ranges,
    const HttpStateFlags &flags)

    case HDR_AUTHORIZATION:
        /** \par WWW-Authorization:
         * Pass on WWW authentication */

        if (!flags.originpeer) {
            hdr_out->addEntry(e->clone());
        } else {
            /** \note In accelerators, only forward authentication if enabled
             * (see also httpFixupAuthentication for special cases)
             */
            if (request->peer_login &&
                    (strcmp(request->peer_login, "PASS") == 0 ||
                     strcmp(request->peer_login, "PASSTHRU") == 0 ||
                     strcmp(request->peer_login, "PROXYPASS") == 0)) {
                hdr_out->addEntry(e->clone());
            }
        }

        break;


I don't understand what might prevent squid from sending the
Authorization header.
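
Or is the answer in the excerpt itself? If I read it correctly, for an
originserver peer (flags.originpeer is set) the header is copied only
when the peer's login= option is PASS, PASSTHRU or PROXYPASS. Our
name3 entry has no login option at all, so anything routed via the
trash_misc ACL would lose the header. A sketch of the change that
reading suggests (an untested guess on my part):

--
# Untested guess: add login=PASS so Authorization is forwarded on
# this peer as well.
cache_peer 10.135.81.100 parent 80 0 no-query login=PASS originserver name=name3
--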

Any help appreciated,

thanks,
Sreenath
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] authentication of every GET request from part of URL?

2015-11-06 Thread Sreenath BH
Hi
I am very new to Squid, and I think I have a strange requirement.
We want to serve cached content only if the client has been
authenticated before.
Since we don't expect the client software to send any information in
headers, we embed a token in the URL that we present to the user.

So when the client s/w uses this URL, we want to extract the token
from the URL and do a small database query to ensure that the token is
valid.

This is in accelerator mode.
Is it possible to use something similar to basic_fake_auth and put my
own code there that does a database query?
If the query fails, we don't return the cached content?

Basically what I am looking for is ability to execute some script for
every request.
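
To make the question concrete, here is roughly the wiring I imagine in
squid.conf (the helper path and ACL name are made up):

--
# Hypothetical: ask an external helper to validate the token embedded
# in the URL path of every request.
external_acl_type token_check children-max=5 %PATH /usr/local/bin/check_token
acl token_ok external token_check
http_access allow token_ok
http_access deny all
--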

Any tips greatly appreciated.

thanks,
Sreenath
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Subject: Re: authentication of every GET request from part of URL?

2015-11-08 Thread Sreenath BH
Hi,
The application has already been designed and implemented, and I
moved to this project only recently, so redesigning the application
now is unlikely.
Also, the video player applications (the ones we have) do not send
headers for authentication. They assume unauthenticated data is being
sent.

Is there a way for me to invoke some custom code for every request
that Squid receives? The script would do the following (a rough sketch
follows the list):

1. Extract part of the URL (the token) and look it up in a database to
see if it is valid.
If valid, proceed to look up the cached object; otherwise go to the
back-end fetch, etc.
2. If the token is not found in the database, return an error, so
that Squid can send back a "not found" type of HTTP error response.
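
A rough sketch of the helper I have in mind (Python; the token
parameter name and the database lookup are placeholders):

--
#!/usr/bin/env python3
# Sketch of an external ACL helper. With a "%PATH" format in
# external_acl_type, Squid sends one URL path per line; we answer
# OK or ERR per line. The validity check is a stub; replace it
# with a real database query.
import sys
from urllib.parse import urlparse, parse_qs

def token_is_valid(token):
    # Placeholder: look the token up in the database here.
    return token == "expected-token"

for line in sys.stdin:
    qs = parse_qs(urlparse(line.strip()).query)
    token = qs.get("token", [None])[0]
    sys.stdout.write("OK\n" if token and token_is_valid(token) else "ERR\n")
    sys.stdout.flush()  # Squid expects one unbuffered reply per request
--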

thanks,
Sreenath


On 7/11/2015 1:33 a.m., Sreenath BH wrote:
> Hi
> I am very new to Squid, and I think I have a strange requirement.
> We want to serve cached content only if the client has been
> authenticated before.
> Since we don't expect the client software to send any information in
> headers, we embed a token in the URL that we present to the user.
>

Um, you know how sending username and password in plain-text Basic auth
headers is supposed to be the worst form of security around?

It's not quite. Sending credentials in the URL is worse, even if it's
just an encoded token.

Why are you avoiding actual HTTP authentication?

Why be so actively hostile to every other cache in existence?


> So when the client s/w uses this URL, we want to extract the token
> from URL and do a small database query to ensure that the token is
> valid.
>
> This is in accelerator mode.
> Is it possible to use something similar to basic_fake_auth and put my
> code there that does some database query?

The "basic_..._auth" parts of that helpers name mean that it performs
HTTP Basic authentication.

The "fake" part means that it does not perform any kind of validation.

All of the text above has been describing how you want to perform
actions which are the direct opposite of everything basic_fake_auth does.

> If the query fails, we don't return the cached content?

What do you want to be delivered instead?

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Subject: Re: authentication of every GET request from part of URL?

2015-11-09 Thread Sreenath BH
Hi Alex,

thanks for your detailed answers.

Here are more details.
1. If the URL does not have any token, we would like to send an error
message back to the browser/client, without doing a cache lookup or
going to the backend apache server.

2. If the token is invalid (that is, we can't find it in the database),
we cannot serve the data. In this case we would like to send back an
HTTP error (something like a 401 or 404), along with a more descriptive
message.

3. If the token is valid (found), remove the token from the URL, and
use the remaining part of the URL as the key for the Squid cache lookup.

4. If found, return that data, along with the proper HTTP status code.
5. If the cache lookup fails (not cached), send an HTTP request to the
back-end apache server (with the token removed), get the result, store
it in the cache, and return it to the client/browser.

I read about external ACL helper programs, and it appears I can do
arbitrary validation in them, so that should work.
Is it correct to assume that the external ACL code runs before URL rewriting?

Does the URL rewriter run before a cache lookup?

thanks,
Sreenath

On 11/8/15, Alex Rousskov  wrote:
> On 11/08/2015 06:34 AM, Sreenath BH wrote:
>
>> Is there a way for me to invoke some custom code for every request
>> that Squid receives?
>
> Yes, there are several interfaces, including a built-in ACL, an external
> ACL helper, a URL rewriter, an eCAP/ICAP service. Roughly speaking, the
> former ones are easier to use and the latter ones are more powerful.
>
>
>> That script would do the following:
>>
>> 1. Extract part of the URL(the token) and look up in a database to see
>> if it is valid.
>> If valid, proceed to lookup cached object, other wise go to
>> back-end fetch, etc.
>> 2. If the token is not found in database, return with an error, so
>> that Squid can send back a not found type (some HTTP error) of
>> response.
>
> If the above are your requirements, avoiding the word "authentication"
> might help. It confuses people into thinking you want something far more
> complex.
>
>
> The validation in step #1 can be done by an external ACL. However, you
> probably forgot to mention that the found token should be removed from
> the URL. To edit the URL, you need to use a URL rewriter or an eCAP/ICAP
> service.
>
> Everything else can be done by built-in ACLs unless you need to serve
> very custom error messages. In the latter case, you will need an eCAP or
> ICAP service.
>
> However, if "go to back-end fetch" means loading response from some
> storage external to Squid without using HTTP, then you need an eCAP or
> ICAP service to do that fetching.
>
> I recommend that you clarify these parts of your specs:
>
> What do you want to do when the token is not found in the URL?
>
> What do you want to do when an invalid token is found in the URL?
>
> Will sending a response using a simple template filled with some basic
> request details suffice when a valid token is not found in the database?
>
>
> HTH,
>
> Alex.
>
>
>
>> On 7/11/2015 1:33 a.m., Sreenath BH wrote:
>>> Hi
>>> I am very new to Squid, and I think I have a strange requirement.
>>> We want to serve cached content only if the client has been
>>> authenticated before.
>>> Since we don't expect the client software to send any information in
>>> headers, we embed a token in the URL that we present to the user.
>>>
>>
>> Um, you know how sending username and password in plain-text Basic auth
>> headers is supposed to be the worst form of security around?
>>
>> It's not quite. Sending credentials in the URL is worse, even if it's
>> just an encoded token.
>>
>> Why are you avoiding actual HTTP authentication?
>>
>> Why be so actively hostile to every other cache in existence?
>>
>>
>>> So when the client s/w uses this URL, we want to extract the token
>>> from URL and do a small database query to ensure that the token is
>>> valid.
>>>
>>> This is in accelerator mode.
>>> Is it possible to use something similar to basic_fake_auth and put my
>>> code there that does some database query?
>>
>> The "basic_..._auth" parts of that helpers name mean that it performs
>> HTTP Basic authentication.
>>
>> The "fake" part means that it does not perform any kind of validation.
>>
>> All of the text above has been describing how you want to perform
>> actions which are the direct opposite of everything basic_fake_auth does.
>>
>>> If the query fails, we don't return the cached content?
>>
>> What do you want to be delivered instead?
>>
>> Amos
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-users
>>
>
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Subject: Re: authentication of every GET request from part of URL?

2015-11-11 Thread Sreenath BH
Hi,

Thanks to everyone who have responded in such detail.

I have done a proof of concept of the solution using an external ACL
helper and a URL rewriter, and it does what I wanted.
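
For the archives, a minimal sketch of the shape the proof of concept
took (paths, helper names and the token parameter are placeholders,
not our real ones). The squid.conf side:

--
# Validate the token on every request, then strip it before the cache
# lookup so one cache entry serves all users.
external_acl_type token_check ttl=0 negative_ttl=0 %PATH /usr/local/bin/check_token
acl token_ok external token_check
http_access allow token_ok
http_access deny all
url_rewrite_program /usr/local/bin/strip_token
--

and the rewriter, which drops the token parameter so the cache key is
the bare URL (the "OK rewrite-url=..." reply format is the one Squid
3.5 shows in its logs):

--
#!/usr/bin/env python3
# Sketch of a URL rewriter: remove the token query parameter.
import sys
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

for line in sys.stdin:
    url = line.split()[0]  # first field of the helper input is the URL
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k != "token"]
    clean = urlunsplit(parts._replace(query=urlencode(kept)))
    sys.stdout.write("OK rewrite-url=%s\n" % clean)
    sys.stdout.flush()
--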

Regarding using a token in the URL as a way to differentiate between
users: I now understand the implications for downstream caches and
overall performance. Thanks for driving home that important point.

regards,
Sreenath


On 11/9/15, Amos Jeffries  wrote:
> On 10/11/2015 6:12 a.m., Sreenath BH wrote:
>> Hi Alex,
>>
>> thanks for your detailed answers.
>>
>> Here are more details.
>> 1. If the URL does not have any token, we would like to send an error
>> message back to the browser/client, without doing a cache lookup, or
>> going to backend apache server.
>>
>> 2. If the token is invalid (that is, we can't find it in the database),
>> we cannot serve the data. In this case we would like to send back an
>> HTTP error (something like a 401 or 404), along with a more descriptive
>> message.
>>
>
> All of the above is external_acl_type helper operations.
>
>> 3. If the token is valid(found), remove the token from the URL, and
>> use remaining part of URL as the key to look in Squid cache.
>>
>> 4. If found return that data, along with proper HTTP status code.
>
> The above is url_rewrite_program operations.
>
>> 5. If cache lookup fails(not cached), send HTTP request to back-end
>> apache server (removing the token), get returned result, store in
>> cache, and return to client/browser.
>
> And that part is normal caching. Squid will do it by default.
>
> Except the "removing the token" part. Which was done at step #4 already,
> so has no relevance here at step #5.
>
>>
>> I read about ACL helper programs, and it appears I can do arbitrary
>> validation in them, so that should work.
>> Is it correct to assume that the external ACL code runs before url
>> rewriting?
>
> The http_access tests are run before re-writing.
> If the external ACL is one of those http_access tests the answer is yes.
>
>>
>> Does the URL rewriter run before a cache lookup?
>
> Yes.
>
>
>
> Although, please note that despite this workaround for your cache, it
> really is *only* your proxy which will work nicely. Every other cache on
> the planet will see your application's URLs as being unique and needing
> different cache slots.
>
> This not only wastes cache space for them, but also forces them to pass
> extra traffic in the form of full-object fetches at your proxy. Which
> raises the bandwidth costs for both them and you far beyond what proper
> header based authentication or authorization would.
>
> As other sysadmins around the world notice this unnecessarily raised
> cost, they will start to hack their configs to force-cache the responses
> from your application. Which will bypass your protection system entirely,
> since your proxy may not even see many of the requests.
>
> The earlier you can get the application re-design underway to remove the
> credentials token from the URL, the earlier the external problems and costs
> will start to disappear.
>
> Amos
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] routing to parent using carp

2015-11-24 Thread Sreenath BH
Hi all,

We are planning to use CARP to route requests based on the request URL.
A part of the URL refers to a part of the file that is being requested
in the GET request (say, a part of a video file).

However, to make the back-end more efficient, it would be great if all
requests for a particular file went to the same parent server.

Is there a way in Squid to make it use a part of the URL when it
calculates the hash to map the URL to a parent?

thanks for any tips,
Sreenath
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] routing to parent using carp

2015-11-24 Thread Sreenath BH
Thanks.

I should have read the documentation completely before posting.

The cache_peer option I was looking for: carp-key=key-specification
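
For anyone finding this later, a sketch of how I expect to use it
(hostnames are placeholders). Hashing on the path alone should mean
the query string (fragment ranges etc.) no longer spreads one file's
fragments across different parents:

--
cache_peer parent1.example.com parent 80 0 carp carp-key=path
cache_peer parent2.example.com parent 80 0 carp carp-key=path
--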

rgds,
Sreenath


On 11/24/15, Amos Jeffries  wrote:
> On 24/11/2015 11:11 p.m., Sreenath BH wrote:
>> Hi all,
>>
>> We are planning to use carp to route requests based on request URL.
>> A part of the URL refers to a part of the file that is being requested
>> in the GET request(say a part of a video file)
>>
>> However, to make the back-end more efficient, it would be great if all
>> requests for a particular file  went to same parent server.
>>
>> Is there a way in Squid to make it use a part of the URL when it
>> calculates the hash to map the URL to a parent?
>
> See the documentation on CARP options:
> <http://master.squid-cache.org/Doc/config/cache_peer/>
>
> Amos
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Time for cache synchronization between siblings

2015-12-15 Thread Sreenath BH
Hi,

I have a setup with three Squid peers (siblings in squid.conf) and
three upstream servers (peers with parent and originserver in
squid.conf).

I am using htcp for the three squid siblings.
How much time does it take for one squid server to 'know' that another
peer has a particular object cached? I see digests exchanged between
the siblings, as logged in cache.log.

I have been able to make a request to one sibling and it resulted in a
sibling_hit.

How I do this test (see the curl sketch after the list):
1. bring up all siblings
2. issue a request to one server (sibling 1)
3. Make sure it is cached in sibling 1
4. Wait for some time (I don't know how long to wait)
5. Make same request to another sibling, say sibling 2
6. Check whether the request went to the upstream server or was a sibling hit.
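
A sketch of the curl commands behind these steps (hostnames and the
URL are placeholders):

--
# steps 2-3: request via sibling 1, repeat to confirm a TCP_HIT
curl -s -o /dev/null http://sibling1.example.com:3128/some/object
curl -s -o /dev/null http://sibling1.example.com:3128/some/object
# step 5: the same URL via sibling 2, then check sibling2's access.log
# for a sibling hit versus a fetch from the parent
curl -s -o /dev/null http://sibling2.example.com:3128/some/object
--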

My problem is that the sibling hits seem to be random. I am not able
to figure out exactly how long it takes for the cache information to
propagate to all siblings.

Any information in this regard is appreciated.

thanks,
Sreenath
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Time for cache synchronization between siblings

2015-12-16 Thread Sreenath BH
Hi,

Thanks for the tips. After disabling digests, I believe performance improved.
However, I found that randomly requests were being routed to parent
even when siblings had the data cached.

From access.log I found TIMEOUT_CARP. I assumed this meant HTCP timed
out and Squid was forced to go fetch the data itself. So I increased
icp_query_timeout to 4000 milliseconds, and the hit rate increased
further.

But I still find that sometimes, even after getting a HIT response
from a sibling, Squid for some reason still decides to go to the
parent for the requested object.

Are there any other reasons why Squid would decide to go to the parent servers?

And another question: when the hash key is computed for storing cached
objects, does Squid use the hostname (or IP address) as part of the
URL, or just the part that appears after the host:port?

For example: the IP addresses of our Squid servers are 10.135.85.2 and
10.135.85.3. A request made to the first server would have had that
server's IP address as part of the URL, while the same request made
later to the second server would carry a different IP address. Does
this affect cache hits at the sibling server?

I think it should not, but is this the case?

We will have a load balancer that sends requests to each squid server,
and we want cache peering to work correctly in this case.

thanks,
Sreenath


On 12/16/15, Amos Jeffries  wrote:
> On 16/12/2015 7:16 a.m., Sreenath BH wrote:
>> Hi,
>>
>> I have a setup with three squid peers (siblings in squid.conf) and
>> three upstream servers(peers with parent and originserver in
>> squid.conf).
>>
>> I am using htcp for the three squid siblings.
>> How much time does it take for one squid server to 'know' that another
>> peer has a particular object cached? I see digests exchanged between
>> the siblings, as logged in cache.log.
>
> When both HTCP and digests are active between siblings, the maximum time
> is however long it takes for the HTCP packet to reach the sibling, be
> parsed, looked up in the cache, and the response to get back.
>
> Digests are used to short-circuit the ICP or HTCP process. If the digest
> contains an entry for the URL the peer will be selected as a possible
> destination server. Regardless of whether the object stored for that URL
> is the same one the client is fetching.
>
> Digests are updated every digest_rebuild_period (default 1 hr). You can
> disable digests with either "digest_generation off" or per-peer with the
> cache_peer no-digest option.
>
>
>>
>> I have been able to make a request to one sibling and it resulted in a
>> sibling_hit.
>>
>> How I do this test is this:
>> 1. bring up all siblings
>> 2. issue a request to one server (sibling 1)
>> 3. Make sure it is cached in sibling 1
>> 4. Wait for some time (I don't know how long to wait)
>
> Until the log of sibling1 contains a digest fetch from sibling2. A
> restart of sibling2 will make that happen faster.
>
>> 5. Make same request to another sibling, say sibling 2
>> 6. Check if it went to upstream server for the request or it was a sibling
>> hit.
>>
>> My problem is that the sibling hits seem to be random. I am not able
>> to figure out exactly how long it takes for the cache information to
>> propagate to all siblings.
>
> Digest is an old algorithm designed as an optimization of ICP, and
> likewise is based on the URL alone, which is great for HTTP/1.0 traffic. In
> modern HTTP/1.1 traffic the Vary headers have a big part to play, and
> HTCP with full-header lookups works much better.
>
> I suggest trying with only HTCP (digests disabled) and see if your
> performance improves at all. YMMV though.
>
> Be aware that there is no guarantee that any object is still in cache,
> even with the more reliable HTCP on-demand lookups. Any object could be
> dropped from sibling1 cache picoseconds after the "i have it" reply
> started being formed for delivery to sibling2 (before it even hits the
> wire on its way back).
>
> Amos
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Time for cache synchronization between siblings

2015-12-17 Thread Sreenath BH
Hi,

Thanks for the detailed response. I really appreciate it.

Unfortunately the load balancer we use is not a Squid load balancer,
and for now I will have to use HTCP.

Please take a look at the following lines from access.log of one of
the three squid servers.

1450351827.534  0 10.135.83.129 UDP_HIT/000 0 HTCP_TST
http://127.0.0.1:3128/media/stream/video/NDo3ZDY1OTRjOS02NjM4LTQyNDMtOGMyNi0zYTc3YmI1MzI3ZjAubXA0?size=xs&start=0.000&end=5.930&;
- HIER_NONE/- -

1450351827.562 20 10.135.83.129 TCP_HIT/200 553852 GET
http://127.0.0.1:3128/media/stream/video/NDo3ZDY1OTRjOS02NjM4LTQyNDMtOGMyNi0zYTc3YmI1MzI3ZjAubXA0?
- HIER_NONE/- video/mp2t

1450352028.731  0 10.135.83.128 UDP_MISS/000 0 HTCP_TST
http://10.135.83.128:3128/media/stream/video/NDo3ZDY1OTRjOS02NjM4LTQyNDMtOGMyNi0zYTc3YmI1MzI3ZjAubXA0?size=xs&start=0.000&end=5.930&;
- HIER_NONE/- -


The first line indicates a hit when queried by a peer. Note that the
IP address is 127.0.0.1.
It was a UDP HIT and it was followed by the actual request for the
cached object, which succeeded.

Now the third line shows a UDP query for the same object, except that
the URL has a different IP address, and the log says it was a MISS.

I don't know what I am doing wrong, but Squid consistently seems to treat
the IP address as part of the URL for the purpose of the HIT/MISS decision.

If all requests are made from a local client (say, curl running
locally on the machine) using 127.0.0.1 as the IP address, HTCP works
correctly.

Even without HTCP, when issuing the same request from localhost and
then from another machine using the externally visible IP address,
Squid does not appear to use the cached object. I am new to HTTP and
think I must be doing something wrong, but can't say what.

I wonder if ICP would have fared better, since it uses just the URL.
Might that be a reason?

thanks,
Sreenath


On 12/17/15, Amos Jeffries  wrote:
> On 17/12/2015 3:10 a.m., Sreenath BH wrote:
>> Hi,
>>
>> Thanks for the tips. After disabling digest I believe performance
>> improved.
>> However, I found that randomly requests were being routed to parent
>> even when siblings had the data cached.
>>
>> From access.log I found TIMEOUT_CARP. I assumed this meant HTCP timed
>> out and squid was forced to go to fetch the data. So I increased
>> icp_query_timeout to 4000 milliseconds, and the hit rate increased
>> further.
>>
>> But I still find that sometimes, even after getting a HIT response
>> from a sibling, squid, for some reason still decides to go to the
>> parent for requested object.
>>
>> Are there any other reasons why squid will decide to go to parent
>> servers?
>
> Just quirks of timing, I think. Squid tracks response latency and prefers
> the fastest source. If the parent is responding faster than the sibling
> for many requests over a short period then Squid might switch to using
> the parent as first choice for a while.
>
>
> Some traffic is also classified as "non-hierarchical". Meaning that it
> makes no sense sending it to a sibling unless all parents are down.
> Things such as CONNECT, OPTIONS, POST etc where the response is not
> possible to be cached at the sibling.
>
>
>>
>> And another question: When the hash key is computed for storing cache
>> objects, does Squid use the hostname(or IP address) also as part of
>> URL, or just the part that appears after the hostname/IP:port numbers?
>
> No. The primary Store ID/key is the absolute URL alone. Unless you are
> using the Store-ID feature of Squid to change it to some other explicit
> string value.
>
> If the URL produces a reply object with Vary header, then the expansion
> of the Vary header format is appended to the primary Store ID/key.
>
>>
>> For example: the IP addresses of our Squid servers are 10.135.85.2 and
>> 10.135.85.3. A request made to the first server would have had that
>> server's IP address as part of the URL, while the same request made
>> later to the second server would carry a different IP address. Does
>> this affect cache hits at the sibling server?
>>
>> I think it should not, but is this the case?
>
> Correct: the Squid IP has nothing to do with the cache storage.
>
>>
>> We will have a load balancer that sends requests to each squid server,
>> and we want cache peering to work correctly in this case.
>
> FYI; the digest and HTCP algorithms you are dealing with are already
> load-balancing algorithms. They are just designed for use in a flat
> 1-layer hierarchy.
>
> If you intend to have a 2-layer hierarchy (frontend LB and backend
> caches) I suggest you might want to look into Squid as the frontend LB
> using the CARP algorithm. The CARP algorithm ensures deterministic storage
> locations for each URL.

Re: [squid-users] Time for cache synchronization between siblings

2015-12-18 Thread Sreenath BH
Hi Amos,

It was definitely ignorance of the tools on my part. I am using curl
for testing my setup.
I was using different URLs (a different host/IP address as part of the
URL) when issuing requests to Squid. That caused the problem I observed.

I read about the curl tool and found out that I can set the Host
header. When I set the Host header to the same value in all curl
commands, cache hits happen as I expected. access.log clearly shows
that the URLs are the same.
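
For reference, the kind of invocation that fixed it (hostname and path
are placeholders). The Host header is pinned to the same value no
matter which sibling we connect to, so the reconstructed URL, and
therefore the cache key, is identical:

--
curl -H "Host: media.example.com" http://10.135.85.2:3128/media/stream/video/FILE
curl -H "Host: media.example.com" http://10.135.85.3:3128/media/stream/video/FILE
--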

So I can say that squid has solved our problem nicely.

A few words about our setup. Squid is used as a reverse caching proxy.

Our application serves video fragments. We use HLS (HTTP Live
streaming) where a given video is broken up into small fragments and
served on demand. Since transcoding to different formats and bit rates
is CPU intensive, we want to cache frequently accessed video
fragments.

Also, we use CARP to make sure that requests for all fragments of a
given video file go to the same backend webserver, so that server does
not have to download the single large video file from the backend
repeatedly.

Thanks to this mailing list I have been able to successfully use squid.

rgds,
Sreenath


On 12/18/15, Amos Jeffries  wrote:
> On 18/12/2015 1:21 a.m., Sreenath BH wrote:
>> Hi,
>>
>> Thanks for the detailed response. I really appreciate it.
>>
>> Unfortunately the load balancer we use is not a squid load balancer
>> and for now I will have to use HTCP.
>>
>> Please take a look at the following lines from access.log of one of
>> the three squid servers.
>> 
>> 1450351827.534  0 10.135.83.129 UDP_HIT/000 0 HTCP_TST
>> http://127.0.0.1:3128/media/stream/video/NDo3ZDY1OTRjOS02NjM4LTQyNDMtOGMyNi0zYTc3YmI1MzI3ZjAubXA0?size=xs&start=0.000&end=5.930&;
>> - HIER_NONE/- -
>>
>> 1450351827.562 20 10.135.83.129 TCP_HIT/200 553852 GET
>> http://127.0.0.1:3128/media/stream/video/NDo3ZDY1OTRjOS02NjM4LTQyNDMtOGMyNi0zYTc3YmI1MzI3ZjAubXA0?
>> - HIER_NONE/- video/mp2t
>>
>> 1450352028.731  0 10.135.83.128 UDP_MISS/000 0 HTCP_TST
>> http://10.135.83.128:3128/media/stream/video/NDo3ZDY1OTRjOS02NjM4LTQyNDMtOGMyNi0zYTc3YmI1MzI3ZjAubXA0?size=xs&start=0.000&end=5.930&;
>> - HIER_NONE/- -
>> 
>>
>> The first line indicates a hit when queried by a peer. Note that the
>> IP address is 127.0.0.1.
>> It was a UDP HIT and it was followed by the actual request for the
>> cached object, which succeeded.
>>
>> Now the third line indicates UDP query for same object, except that
>> URL has a different IP address, and the log says it was a MISS.
>>
>> I don't know what I am doing wrong, but it consistently seems to treat
>> the IP address as part of the URL for the purpose of the HIT/MISS decision.
>
>
> Notice how the normal HTTP request (TCP_* line) is also using
> "127.0.0.1" for origin server name.
>
> This means the two HTCP requests really are for two very different URLs.
> Completely different origin servers being contacted.
>
>
>>
>> If all requests were made from a local client(say using curl running
>> locally on the machine) and using 127.0.0.1 as IP address, HTCP works
>> correctly.
>
> What is the output of "squid -v" ?
>
> And beyond being a layer of caches behind a LB. What is the purpose of
> this installation;
>  reverse-proxy / CDN ?
>  ISP forward/explicit proxy farm?
>  Intranet gateway?
>  some mix of the above?
>
> and what is your full squid.conf ? (without comments and empty lines of
> course).
>
>
> What exactly are the clients (curl) requesting?
>  from the Squid siblings directly? or through the LB?
>
>
>>
>> Even without HTCP, just issuing the same request from localhost and
>> another machine using the externally visible IP address, squid does
>> not appear to use the cached object. I am new to HTTP and think I must be
>> doing something wrong, but can't say what.
>
> Huh? Those log lines you posted above contradict that. The first HTCP said
> HIT and the TCP object fetch was served from the cache. The second said
> MISS on the other URL, so no TCP fetch.
>
> The sibling lookup and HTCP appears to be working perfectly correct.
>
>>
>> I wonder if ICP would have fared better since it uses just the URL.
>> Might that be a reason?
>
> No. ICP always fares worse. It says UDP_HIT in a lot of cases where the
> URL is the same but the follow-up TCP fetch discovers HTTP MIME headers
> negotiating some variant object not in the sibling cache. So UDP_HIT
> followed by TCP_MISS.
> That is almost the worst-case scenario: it causes a minimum of 2x proxy
> processing latency delays on the whole transaction from the client's
> perspective.

[squid-users] dynamic messages from acl helper program

2016-01-19 Thread Sreenath BH
Hi All,

We are using an external ACL helper to authenticate users. Squid allows
a template file to be used to send a custom error message when the ACL
helper sends an "ERR" string back to Squid.

In our case the acl helper contacts another web service for authentication.
Is there a way to send the message we get from the web service (or
anything that changes from request to request) back to the client?

Essentially what we are looking for is a way to change the error
message at run time.
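
To illustrate with made-up values: the helper can already return a
per-request message= kv-pair, and the template picks it up via %o, but
the surrounding template text is fixed:

--
# helper output for one denied lookup:
ERR message=token expired, please request a new download link

# share/errors/templates/ERR_MY_ACL then references it:
<p>Request denied: %o</p>
--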

thanks for any help,
Sreenath
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] key-value pairs output from external helper

2016-01-20 Thread Sreenath BH
Hi,

Squid allows external ACL helpers to write arbitrary key-value pairs
in their output.
As per the documentation, these values can be set in both the ERR and
OK cases.

Are these available for use by other modules of Squid?

Specifically, can they be accessed by the URL rewriter helper? We
would like to rewrite the URL using some of the key-value pairs set by
the external ACL helper.

See the following from the documentation:
---
clt_conn_tag=TAG
    Associates a TAG with the client TCP connection.
    The TAG is treated as a regular annotation but persists across
    future requests on the client connection rather than just the
    current request. A helper may update the TAG during subsequent
    requests by returning a new kv-pair.
-
If we set clt_conn_tag to some string in the external ACL helper, can
it be picked up by the external URL rewriter?

Ours is a reverse caching setup.

thanks,
Sreenath
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] key-value pairs output from external helper

2016-01-22 Thread Sreenath BH
Hi

I added code to set "tag" in the ACL helper, as follows:

    print "OK tag=abcd\n"

and I passed it through to the URL rewriter by adding:

    url_rewrite_extras "%et"

So, I guess it is working.
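
For completeness, the pieces that worked together (paths are ours,
shown here as placeholders):

--
# the external ACL helper replies e.g.:  OK tag=abcd
external_acl_type jio_helper ttl=0 negative_ttl=0 %PATH /usr/local/bin/acl
acl tagged external jio_helper
http_access allow tagged

# %et (the tag from the external ACL) is appended to what the
# rewriter receives on each request line:
url_rewrite_program /usr/local/bin/rewriter
url_rewrite_extras "%et"
--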

thanks for the help
Sreenath

On 1/20/16, Alex Rousskov  wrote:
> On 01/20/2016 09:33 AM, Sreenath BH wrote:
>
>> Squid allows external acl helpers to write arbitrary key-value pairs
>> in its output.
>
>> Are these available for use by other modules of Squid?
>
> The answer depends on the helper: eCAP services and many helpers support
> admin-configurable metadata exchanges. Search squid.conf.documented for
> "_extras" and "adaptation_meta".
>
>
>> Specifically, can these be accessed by URL rewriter helper.
>
> Yes. See url_rewrite_extras.
>
>
>> If we set clt_conn_tag to some string in external ACL helper, can this
>> be picked up by the external url rewriter?
>
> Yes, that should work in Squid v3.5 or later. If it does not, it is
> probably a bug.
>
>
> HTH,
>
> Alex.
>
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] behavior of external acl helper in Squid 3.5.13

2016-01-22 Thread Sreenath BH
Hi

I am using an external helper for authentication. I have just one
http_access line in squid.conf that refers to this external helper.

I also have a URL rewriter, to which I pass some information using the
"tag" key. I observed that the ACL is not invoked in several cases;
Squid just calls the URL rewriter.

Squid sometimes seems to skip the ACL phase and proceed directly to the
URL rewriter.
Are there cases when Squid proceeds without performing the external ACL?
Please see the log lines below:

--
2016/01/22 14:46:52.091 kid1| 23,3| url.cc(357) urlParse: urlParse:
Split URL 'http://localhost:3000/file/download?key=XXXYYY' into
proto='http', host='localhost', port='3000',
path='/file/download?key=XXXYYY'
2016/01/22 14:46:52.091 kid1| 84,5| helper.cc(1167) GetFirstAvailable:
GetFirstAvailable: Running servers 1
2016/01/22 14:46:52.091 kid1| 84,5| helper.cc(1309) helperDispatch:
helperDispatch: Request sent to jio_helper #Hlpr4, 26 bytes
2016/01/22 14:46:52.091 kid1| 84,9| helper.cc(386) helperSubmit:
buf[26]=/file/download?key=XXXYYY

2016/01/22 14:46:52.091 kid1| 84,5| helper.cc(866) helperHandleRead:
helperHandleRead: 18 bytes from jio_helper #Hlpr4
2016/01/22 14:46:52.091 kid1| 84,9| helper.cc(875) helperHandleRead:
accumulated[18]=OK tag=something4

2016/01/22 14:46:52.091 kid1| 84,3| helper.cc(892) helperHandleRead:
helperHandleRead: end of reply found
2016/01/22 14:46:52.091 kid1| 84,3| Reply.cc(29) parse: Parsing helper buffer
2016/01/22 14:46:52.091 kid1| 84,3| Reply.cc(48) parse: Buff length is
larger than 2
2016/01/22 14:46:52.091 kid1| 84,3| Reply.cc(52) parse: helper Result = OK
2016/01/22 14:46:52.091 kid1| 84,5| helper.cc(1167) GetFirstAvailable:
GetFirstAvailable: Running servers 1
2016/01/22 14:46:52.092 kid1| 84,5| helper.cc(1309) helperDispatch:
helperDispatch: Request sent to redirector #Hlpr2, 58 bytes
2016/01/22 14:46:52.092 kid1| 84,9| helper.cc(386) helperSubmit:
buf[58]=http://localhost:3000/file/download?key=XXXYYY something4

2016/01/22 14:46:52.092 kid1| 84,5| helper.cc(1167) GetFirstAvailable:
GetFirstAvailable: Running servers 1
*** http://localhost:3000/file/download?key=XXXYYY something4
2016/01/22 14:46:52.092 kid1| 84,5| helper.cc(866) helperHandleRead:
helperHandleRead: 28 bytes from redirector #Hlpr2
2016/01/22 14:46:52.092 kid1| 84,9| helper.cc(875) helperHandleRead:
accumulated[28]=OK rewrite-url="something4"

2016/01/22 14:46:52.092 kid1| 84,3| helper.cc(892) helperHandleRead:
helperHandleRead: end of reply found
2016/01/22 14:46:52.092 kid1| 84,3| Reply.cc(29) parse: Parsing helper buffer
2016/01/22 14:46:52.092 kid1| 84,3| Reply.cc(48) parse: Buff length is
larger than 2
2016/01/22 14:46:52.091 kid1| 84,3| Reply.cc(52) parse: helper Result = OK
2016/01/22 14:46:52.091 kid1| 84,5| helper.cc(1167) GetFirstAvailable:
GetFirstAvailable: Running servers 1
2016/01/22 14:46:52.092 kid1| 84,5| helper.cc(1309) helperDispatch:
helperDispatch: Request sent to redirector #Hlpr2, 58 bytes
2016/01/22 14:46:52.092 kid1| 84,9| helper.cc(386) helperSubmit:
buf[58]=http://localhost:3000/file/download?key=XXXYYY something4

2016/01/22 14:46:52.092 kid1| 84,5| helper.cc(1167) GetFirstAvailable:
GetFirstAvailable: Running servers 1

2016/01/22 14:46:52.092 kid1| 84,5| helper.cc(866) helperHandleRead:
helperHandleRead: 28 bytes from redirector #Hlpr2
2016/01/22 14:46:52.092 kid1| 84,9| helper.cc(875) helperHandleRead:
accumulated[28]=OK rewrite-url="something4"

2016/01/22 14:46:52.092 kid1| 84,3| helper.cc(892) helperHandleRead:
helperHandleRead: end of reply found
2016/01/22 14:46:52.092 kid1| 84,3| Reply.cc(29) parse: Parsing helper buffer
2016/01/22 14:46:52.092 kid1| 84,3| Reply.cc(48) parse: Buff length is
larger than 2
2016/01/22 14:46:52.092 kid1| 84,3| Reply.cc(52) parse: helper Result = OK
2016/01/22 14:46:52.092 kid1| ERROR: URL-rewrite produces invalid
request: GET something4 HTTP/1.1
2016/01/22 14:46:52.092 kid1| 11,5| HttpRequest.cc(474) detailError:
current error details: 6/0
2016/01/22 14:46:52.092 kid1| 11,2| client_side.cc(1391)
sendStartOfMessage: HTTP Client local=[::1]:3000 remote=[::1]:35075 FD
9 flags=1
2016/01/22 14:46:52.092 kid1| 11,2| client_side.cc(1392)
sendStartOfMessage: HTTP Client REPLY:
-
HTTP/1.1 500 Internal Server Error^M
Server: squid/3.5.13^M
Mime-Version: 1.0^M
Date: Fri, 22 Jan 2016 14:46:52 GMT^M
Content-Type: text/html;charset=utf-8^M
Content-Length: 3889^M
X-Squid-Error: ERR_CANNOT_FORWARD 0^M
Vary: Accept-Language^M
Content-Language: en^M
X-Cache: MISS from TEJ-DL-CS-SERVER04^M
Via: 1.1 TEJ-DL-CS-SERVER04 (squid/3.5.13)^M
Connection: keep-alive^M
^M

--
2016/01/22 14:46:52.092 kid1| 84,5| helper.cc(1167) GetFirstAvailable:
GetFirstAvailable: Running servers 1
2016/01/22 14:47:13.103 kid1| 11,2| client_side.cc(2345)
parseHttpRequest: HTTP Client local=[::1]:3000 remote=[::1]:35076 FD 9
flags=1
2016/01/22 14:47:13.103 kid1| 11,2| client_side.cc(2346)
parseHttpRequest: HTTP Client REQUEST:
-
GET /file/download?key=XXXYYY HTTP/1.1^

Re: [squid-users] behavior of external acl helper in Squid 3.5.13

2016-01-22 Thread Sreenath BH
Hi All,

Before posting, I should have read the documentation completely.

I set both ttl and negative_ttl to zero, and it is working fine.
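
Concretely, the helper line now reads something like:

--
# ttl=0 / negative_ttl=0 stop Squid from caching the helper's OK/ERR
# verdicts, so the external ACL is consulted on every request instead
# of being skipped on a cached result.
external_acl_type jio_helper ttl=0 negative_ttl=0 children-max=2 %PATH /usr/local/bin/acl
--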

thanks,
Sreenath


On 1/22/16, Sreenath BH  wrote:
> Hi
>
> I am using an external helper for authentication. I have just one
> http_access in squid.conf that refers to this external helper.
>
> I also have a url rewriter to which I pass some information using "tag"
> key.
> I observed that the acl is not invoked in several cases, just calling
> the url rewriter.
>
> Squid sometimes seems to skip acl phase and directly proceeds to url
> rewriter.
>
> Are there cases when squid proceeds without performing the external acl?
> Please see log lines below:
>
> --
> 2016/01/22 14:46:52.091 kid1| 23,3| url.cc(357) urlParse: urlParse:
> Split URL 'http://localhost:3000/file/download?key=XXXYYY' into
> proto='http', host='localhost', port='3000',
> path='/file/download?key=XXXYYY'
> 2016/01/22 14:46:52.091 kid1| 84,5| helper.cc(1167) GetFirstAvailable:
> GetFirstAvailable: Running servers 1
> 2016/01/22 14:46:52.091 kid1| 84,5| helper.cc(1309) helperDispatch:
> helperDispatch: Request sent to jio_helper #Hlpr4, 26 bytes
> 2016/01/22 14:46:52.091 kid1| 84,9| helper.cc(386) helperSubmit:
> buf[26]=/file/download?key=XXXYYY
>
> 2016/01/22 14:46:52.091 kid1| 84,5| helper.cc(866) helperHandleRead:
> helperHandleRead: 18 bytes from jio_helper #Hlpr4
> 2016/01/22 14:46:52.091 kid1| 84,9| helper.cc(875) helperHandleRead:
> accumulated[18]=OK tag=something4
>
> 2016/01/22 14:46:52.091 kid1| 84,3| helper.cc(892) helperHandleRead:
> helperHandleRead: end of reply found
> 2016/01/22 14:46:52.091 kid1| 84,3| Reply.cc(29) parse: Parsing helper
> buffer
> 2016/01/22 14:46:52.091 kid1| 84,3| Reply.cc(48) parse: Buff length is
> larger than 2
> 2016/01/22 14:46:52.091 kid1| 84,3| Reply.cc(52) parse: helper Result = OK
> 2016/01/22 14:46:52.091 kid1| 84,5| helper.cc(1167) GetFirstAvailable:
> GetFirstAvailable: Running servers 1
> 2016/01/22 14:46:52.092 kid1| 84,5| helper.cc(1309) helperDispatch:
> helperDispatch: Request sent to redirector #Hlpr2, 58 bytes
> 2016/01/22 14:46:52.092 kid1| 84,9| helper.cc(386) helperSubmit:
> buf[58]=http://localhost:3000/file/download?key=XXXYYY something4
>
> 2016/01/22 14:46:52.092 kid1| 84,5| helper.cc(1167) GetFirstAvailable:
> GetFirstAvailable: Running servers 1
> *** http://localhost:3000/file/download?key=XXXYYY something4
> 2016/01/22 14:46:52.092 kid1| 84,5| helper.cc(866) helperHandleRead:
> helperHandleRead: 28 bytes from redirector #Hlpr2
> 2016/01/22 14:46:52.092 kid1| 84,9| helper.cc(875) helperHandleRead:
> accumulated[28]=OK rewrite-url="something4"
>
> 2016/01/22 14:46:52.092 kid1| 84,3| helper.cc(892) helperHandleRead:
> helperHandleRead: end of reply found
> 2016/01/22 14:46:52.092 kid1| 84,3| Reply.cc(29) parse: Parsing helper
> buffer
> 2016/01/22 14:46:52.092 kid1| 84,3| Reply.cc(48) parse: Buff length is
> larger than 2
> 2016/01/22 14:46:52.091 kid1| 84,3| Reply.cc(52) parse: helper Result = OK
> 2016/01/22 14:46:52.091 kid1| 84,5| helper.cc(1167) GetFirstAvailable:
> GetFirstAvailable: Running servers 1
> 2016/01/22 14:46:52.092 kid1| 84,5| helper.cc(1309) helperDispatch:
> helperDispatch: Request sent to redirector #Hlpr2, 58 bytes
> 2016/01/22 14:46:52.092 kid1| 84,9| helper.cc(386) helperSubmit:
> buf[58]=http://localhost:3000/file/download?key=XXXYYY something4
>
> 2016/01/22 14:46:52.092 kid1| 84,5| helper.cc(1167) GetFirstAvailable:
> GetFirstAvailable: Running servers 1
>
> 2016/01/22 14:46:52.092 kid1| 84,5| helper.cc(866) helperHandleRead:
> helperHandleRead: 28 bytes from redirector #Hlpr2
> 2016/01/22 14:46:52.092 kid1| 84,9| helper.cc(875) helperHandleRead:
> accumulated[28]=OK rewrite-url="something4"
>
> 2016/01/22 14:46:52.092 kid1| 84,3| helper.cc(892) helperHandleRead:
> helperHandleRead: end of reply found
> 2016/01/22 14:46:52.092 kid1| 84,3| Reply.cc(29) parse: Parsing helper
> buffer
> 2016/01/22 14:46:52.092 kid1| 84,3| Reply.cc(48) parse: Buff length is
> larger than 2
> 2016/01/22 14:46:52.092 kid1| 84,3| Reply.cc(52) parse: helper Result = OK
> 2016/01/22 14:46:52.092 kid1| ERROR: URL-rewrite produces invalid
> request: GET something4 HTTP/1.1
> 2016/01/22 14:46:52.092 kid1| 11,5| HttpRequest.cc(474) detailError:
> current error details: 6/0
> 2016/01/22 14:46:52.092 kid1| 11,2| client_side.cc(1391)
> sendStartOfMessage: HTTP Client local=[::1]:3000 remote=[::1]:35075 FD
> 9 flags=1
> 2016/01/22 14:46:52.092 kid1| 11,2| client_side.cc(1392)
> sendStartOfMessage: HTTP Client REPLY:
> -
> HTTP/1.1 500 Internal Server Error^M

[squid-users] external acl helpers working with deny_info

2016-01-24 Thread Sreenath BH
Hi All,

I am trying to validate my understanding of the interaction between
external ACLs, deny_info and "http_access deny all".

My squid.conf has just two rules: first the external ACL helper, and
then the "deny all", as follows:

Case (1)
---
external_acl_type my_helper ttl=0 negative_ttl=0 children-max=2 %PATH /usr/local/bin/acl
acl AclName external my_helper
deny_info 404:ERR_MY_ACL  AclName
http_access allow AclName

http_access deny all


I want a default error code of 404 to be returned, along with a custom
error message file being sent.
My observations are as follows:

1. If my external ACL prints OK, it proceeds with processing.
2. If it prints ERR, instead of using the custom message, Squid proceeds
to the next access rule, which is "http_access deny all".

When that denies the request, a default 403 message is printed.

If I remove the "deny all" line, it works well.

Case (2)
I tried changing "http_access allow" to "http_access deny" as follows:


external_acl_type my_helper ttl=0 negative_ttl=0 children-max=2 %PATH /usr/local/bin/acl
acl AclName external my_helper
deny_info 404:ERR_MY_ACL AclName
http_access deny !AclName

http_access deny all
--

In this case, whenever the ACL helper sends "ERR", Squid prints the
correct error message.
But now, if it succeeds (prints OK), it goes to the next line and fails
there, instead of proceeding with further processing.

Even in this case, removing the following "deny all" makes it work correctly.

I find it strange that even when the external ACL helper matches and
prints OK, because of the way the http_access line is worded, Squid
does not take it as a pass and goes on to check the next http_access
line.

Is this expected behavior? Or am I missing something?

thanks,
Sreenath
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] external acl helpers working with deny_info

2016-01-25 Thread Sreenath BH
Hi Amos,

Thanks for detailed explanation.

For case #1 in my original post, is it a bug that will get fixed at some point?

I was able to get the behavior I want by adding a dummy ACL as follows
(after the external ACL line):

acl myacl src all
deny_info ERR_X myacl
http_access deny myacl

http_access deny all

myacl is the same as all, but now, even after retaining "http_access
deny all", it works correctly.
With the above, even the "message" that was set in the external ACL
helper was properly used in the error page.

I am just not sure it is the right way to do it.

Thanks,
Sreenath


On 1/25/16, Amos Jeffries  wrote:
> On 25/01/2016 5:18 a.m., Sreenath BH wrote:
>> Hi All,
>>
>> I am trying to validate my understanding of external acl, deny_info
>> and "http_access deny all" interaction.
>>
>> My squid conf has just two rules. First is external ACL helper and
>> then the "deny all" as follows:
>>
>> Case (1)
>> ---
>> external_acl_type my_helper ttl=0 negative_ttl=0 children-max=2 %PATH
>> /usr/local/bin/acl
>> acl AclName external my_helper
>> deny_info 404:ERR_MY_ACL  AclName
>> http_access allow AclName
>>
>> http_access deny all
>> 
>>
>> I want a default error code of 404 to be returned, along with a custom
>> error message file being sent.
>> My observations are as follows:
>>
>> 1. If my external ACL prints OK, it proceeds with processing.
>> 2. If it prints ERR, instead of using the custom message, it proceeds
>> to next access rule, which is "http_access deny all"
>>
>> When that fails it prints a default 403 message.
>>
>> If I remove "deny all" line it works well.
>
> That is a bug. It should act the same as if the deny all was still there.
>
>
>>
>> Case (2)
>> I tried changing "http_access  allow" to "http_access deny" follows:
>>
>> 
>> external_acl_type my_helper ttl=0 negative_ttl=0 children-max=2 %PATH
>> /usr/local/bin/acl
>> acl AclName external my_helper
>> deny_info 404:ERR_MY_ACL AclName
>> http_access deny !AclName
>>
>> http_access deny all
>> --
>>
>> In this case, whenever the acl helper sends "ERR", it prints the
>> correct error message.
>> But now, if it succeeds (prints OK), it goes to next line and fails
>> there, instead of proceeding with further processing.
>>
>> Even in this case, removing the next "deny all"  will work correctly.
>>
>> I find it strange that even when the external ACL helper matches and
>> prints OK, because of the way the http_access line is worded, it does
>> not take it as a pass and goes to check the next http_access line.
>
> You seem to be confusing the OK/ERR helper protocol codes with HTTP
> pass/reject actions.
>
> * OK is not a "pass" it is a "match"
>
> * the "!" means inversion of the match/mismatch value
>
> So the !AclName means ERR is now a match and OK is a non-match.
>
>
> When the !AclName is a match the request is denied as per your rule and
> using the deny_info details in the rejection message.
>
> When the !AclName is a mis-match it skips and the "deny all" line denies
> the request.
>
> When you remove the "deny all" line the default action for this case #2
> becomes "allow all".
>
>>
>> Is this expected behavior? Or am I missing something?
>
>
> deny_info is the directive tying some specific output to an ACL name.
> Which is to be sent if (and only if) that ACL was used on a "deny" line.
>
> The bug in case #1 is that the last tested ACL is considered to be the
> reason for denial and its action performed when a deny happens. But
> without that explicit "deny all" the last tested was actually your ACL
> test on the "allow" line.
>
> case #2 is expected behaviour.
>
>
> Amos
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Sending json error messages

2016-02-01 Thread Sreenath BH
Hi All,

We want to send an error message in JSON format when the external ACL
denies a request.
Even if we send a JSON-formatted message (using the message= key-value
pair) in the external helper, the final output is still HTML.

We have a custom error file in the share/errors/templates directory,
and we use %o to pick up the message token.

Is there any way to not send any HTML tags at all and simply send
whatever was output by the external helper?
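
The closest I have found is to strip the template down to nothing but
the helper's message, i.e. an ERR_MY_ACL template file whose entire
content is:

--
%o
--

That removes all the markup we control, but as far as I can tell the
response's Content-Type header is still text/html, so the client has
to tolerate that.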

I am trying to understand the contents of the files in the templates
folder, but it is going over my head.

thanks for any help,
Sreenath
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Sending json error messages

2016-02-01 Thread Sreenath BH
I believe ICAP or eCAP would be better suited to our needs. But having
invested in the external_acl_type helper way of working, I am
exploring what best can be done.

I hope there is a simple way to do this.

Also, ICAP is essentially another web server (unless I use eCAP),
which I would like to avoid.

thanks,
Sreenath


On 2/1/16, Eliezer Croitoru  wrote:
> Hey,
>
> I do not have an answer to your question but I wanted to ask a question.
> If you were able to send the whole page with the data directly to
> the client, would it be OK for your use case?
> It's just that, based on your external helper logic, it might be possible
> to use ICAP or eCAP instead of an external acl helper (if indeed your
> helper is external_acl type).
>
> Eliezer
>
> On 01/02/2016 19:53, Sreenath BH wrote:
>> Hi All,
>>
>> We want to send an error message in json format when the external acl
>> denies a request.
>> Even if we send a json formatted message (using message= key value
>> pair) in external helper, the final output is still html.
>>
>> We have a custom error file in share/error/templates directory, and we
>> use %o to pickup the message token.
>>
>> Is there any way to not send any html tags at all and simply send
>> whatever was output by the external helper?
>>
>> I am trying to understand the contents of the files in template folder
>> but is going above my head.
>>
>> thanks for any help,
>> Sreenath
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-users
>>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] conditionally running url rewriter helper

2016-02-08 Thread Sreenath BH
Hi All,

Is there a way to make Squid invoke the external URL rewriter helper
only for some requests (depending on some component of the path)?

While it is possible to check the URL and take no action inside the
rewriter, I want to know if this overhead can be avoided.
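
One thing I am looking at is url_rewrite_access, which seems to gate
which requests are handed to the rewriter at all; a sketch, assuming a
hypothetical /media/ prefix:

--
acl needs_rewrite urlpath_regex ^/media/
url_rewrite_access allow needs_rewrite
url_rewrite_access deny all
--

Is that the intended tool for this?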

thanks for any hints.
Sreenath
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users