Software error:
Can't connect to the database.
Error: Access denied for user 'bugs'@'localhost' (using password: YES)
Is your database installed and up and running?
Do you have the correct username and password selected in localconfig?
For help, please send mail to the webmaster ([no address g
We are using Squid as a perimeter egress filter. One thing I've recently noticed
is that, with my current config, it's possible to make a request through Squid to
an HTTPS endpoint without doing a CONNECT request.
I was wondering if this should be allowed behavior for a proxy, or if it's just
a bug.
Does Squid send SSL to the cache peer after ssl_bump, or a clear link, or what?
How does SSL work between Squid and the peer? Do I need keys?
-
**
* Crash to the future
**
--
Sent from:
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-Users-f
What is the benefit of setting real-time priority for Squid,
or at least a higher priority than the default?
Would using nice -n 20 or a bit higher harm the system?
Look at
http://raspbian.raspberrypi.org/raspbian/dists/buster/contrib/
binary-all does not exist, lol; a 404 response is normal.
Squid has nothing to do with it, so test your link without Squid when you have a
problem.
Squid cannot cache a ghost.
It crashes on Solaris; is this a bug worth reporting, or?
cache.log
2019/07/30 10:10:23 kid1| Error negotiating SSL connection on FD 22:
error:0001:lib(0):func(0):reason(1) (1/0)
2019/07/30 10:10:37 kid1| Error negotiating SSL connection on FD 54:
error:0001:lib(0):func(0):reason(1) (1/0)
2019/
We are in the future now, and there is less caching as sites move to HTTPS.
Mostly what is left are partial objects, or large files that get disconnected halfway,
so caching ranges would help gain more HITs.
Is there any attempt in the near future to make Squid cache ranges of files?
Try that: the Store ID helper, AKA Dynamic Content Booster, is a scalable, enterprise-grade,
high-performance, multi-threaded content deduplicator. It works with any
Squid starting with v3, and with all products based on it.
https://github.com/yvoinov/store-id-helper
squid.conf: https://pastebin.com/D49H5rYS
squid -k parse: https://pastebin.com/F0U2SvUm
--
Joseph M Jones
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
After all those missing includes,
and after adding the missing #include to proxyp/Header.h,
another issue:
pconn.cc: In member function ‘void PconnPool::push(const
ConnectionPointer&, const char*)’:
pconn.cc:435:32: error: ‘%s’ directive output may be truncated writing
up to 777 bytes into a region of s
Please re-test. The proposed fix at https://github.com/squid-cache/squid/pull/375 did not
fix everything.
proxyp/Elements.h:
#ifndef SQUID_PROXYP_ELEMENTS_H
#define SQUID_PROXYP_ELEMENTS_H
#include "sbuf/SBuf.h"
#include
#include
Fix that, and there are more problems after:
In file included from Header.cc:11:
Alex, it's easy: I download patch by patch and do my own tests.
When I have a problem, I download the latest 5.x series release and test again,
to match my problem: whether it's a mistake I made or a bug that you guys made.
No automation at all; hard manual testing is my best friend.
Notice the Subject ^
Log PROXY protocol v2 TLVs; fix PROXY protocol parsing bugs
(#342), I think.
In file included from Reply.cc:14:
../../src/helper.h:264:18: error: ‘map’ in namespace ‘std’ does not
name a template type
typedef std::map RequestIndex;
^~~
../../src/helper.h:264:13: note: ‘std::map’ is defined in header
‘<map>’; did you forget to ‘#include <map>’?
So far, swap.state is affected.
I have:
cache_effective_user proxy
cache_effective_group proxy
When I use squid without -N, the group and owner of swap.state are both proxy.
When I use squid -N, the swap.state group stays proxy, but the owner changes to root,
and it gives access denied.
Is this a bug?
So 1.000 is seconds; thank you. I'm trying to get the response times on my graph correct.
Thank you for your help.
When issuing squidclient mgr:store_id,
is the time value 1.000 equal to 1 ms, or?
A Store ID helper can minimize CDN links.
First, thank you for the Helper_states page; I could not find it.
Second, I have 4 identical servers, and 3 never have this issue:
flag = BW
All servers have very high traffic, and the timings max out between 0.300 and
1.500.
Just one server has the BW issue, and its CPU gets high as well; one PC is behind it for
testing it w
A question on mgr:store_id:
when Flags has BW
(B = BUSY,
W = WRITING),
any idea what the cause of that is?
1. network uplink delay
2. Squid
3. the helper
I need help, thanks.
Sucks; reloading does not help, but when I re-click on master it shows the latest
commit. Thanks.
This is what I see in master; is it correct that the latest is "Commits on Oct 8, 2018"? Just
wondering.
https://i.imgur.com/8bmvaBV.png
When code gets approved for merge, it gets closed without updating the code
section; is that normal?
The last update I see is:
ntlm_fake_auth: add ability to test delayed responses (#294) …
Please re-check.
Loads without issue now.
Thanks,
--
Joseph M Jones
Senior Application Engineer
EAN – Expedia Affiliate Network
From: squid-users on behalf of Alex
Rousskov
Sent: Thursday, October 11, 2018 16:44
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid workers
I'm trying to find a root cause for failed workers. We have three squid
instances that act as transparent forward proxies that limit internet
connectivity for our network by doing URL whitelisting. Current throughput per
instance is about 90MB/s. After a restart of squid all workers seem to be
Thank you for the long, detailed reply. I use the latest source and Squid 5;
I never use the repo Squid,
and I do know how to compile. If you don't want me to report issues, then
fine.
Thanks.
Yeah, GCC 8 is paranoid-strict; everyone knows that. I disable it with
-Wno-maybe-uninitialized
but got more and more, and now GCC 8 is standard in Debian, so yadij knows all
about it. He started converting the code; I hope he will continue, especially with the bad ones:
pconn.cc: In member function ‘void PconnPool::push(const
ConnectionP
Hi, after I upgraded my Debian I got GCC 8, with warnings being treated as
errors:
cache_cf.cc: In function ‘void parse_time_t(time_t*)’:
cache_cf.cc:2981:36: error: ‘tval’ may be used uninitialized in this
function [-Werror=maybe-uninitialized]
*var = static_cast<time_t>(tval/1000);
Yes, but it's expensive, since Squid does not have partial caching:
acl force_full_download url_regex \.esd \.exe \.psf \.cab \.ipa \.zip \.pkg
\.msp
range_offset_limit -1 force_full_download
quick_abort_min -1 force_full_download
quick_abort_max 0 KB
quick_abort_pct 100
Eliezer, I agree with you on all that.
There is no truly secure thing for the client as long as the web bug exists :)
(those large links with small sizes, or so).
The main reason I think they are pushing more security is to kill HTTPS
proxies, so big companies
can sell their own proxies with very expensive ke
Encrypted SNI completely kills SSL Bump, and everyone will follow that new SNI
encryption.
Is there any hope of starting work to add this option to Squid?
https://appuals.com/apple-cloudflare-fastly-and-mozilla-devise-solution-to-encrypting-sni/
Ralf Hildebrandt wrote
> * joseph <
> chip_pop@
> >:
>> https://github.com/yvoinov/squid-ecap-gzip
>
>>>>URL returns 404!
> right, it will be posted again; sorry for that, some small changes are required
> to be done.
> i will announce w
https://github.com/yvoinov/squid-ecap-gzip
This software is an eCAP adapter for HTTP compression with GZIP and DEFLATE.
It is a fully reworked, bug-fixed, and improved version, ready for production
use, based on Constantin Rack's https://github.com/c-rack/squid-ecap-gzip
adapter.
Full source:
Hi, if anyone is interested in a high-performance, multi-threaded Store ID helper
with command-line options,
visit https://github.com/yvoinov/store-id-helper
Hi, I tried to apply the patch
http://www.squid-cache.org/Versions/v5/changesets/squid-5-23da195f75b394d00ddac4fa67ce6895d96292d7.patch
The file src/ssl/stub_libsslutil.cc does not exist. Should I ignore this and
consider it an extra mistake in that patch, or?
Even when I download the latest release, it does not exist.
h
Hi, also lower maximum_object_size_in_memory 4096 KB to
maximum_object_size_in_memory 1 MB; going higher is not wise.
What I was trying to say is about partial range support, bro, lol; and Eduard's
work is perfect.
Like making Squid cache parts of files :)
Thank you, and thanks to Eduard Bagdasaryan:
no more memory overruns with his big patch.
I hope someone works on real range caching soon, since the real world is mostly
HTTPS now,
so adding real range caching would save us more bandwidth on the HTTP Squid
server.
Testing https://github.com/squid-cache/squid/pull/155:
working fine :)
I will report if something else comes up, but so far so good. Thanks.
Sorry, this fixes the rebuild at startup if swap.state is empty,
but there is another issue: when you stop Squid, it deletes swap.state and
re-creates a new, empty one.
I found where the bug is and made it work, but I don't know what is best.
In UFSSwapDir.cc, in the function Fs::Ufs::UFSSwapDir::openTmpSwapLog(int
*clean_flag, int *zero_flag):
it was:     *zero_flag = log_sb.st_size == 0 ? 1 : 0;
changed to: *zero_flag = log_sb.st_size == 0 ? 0 : 1;
It would be best to change the int to b
No SMP, just a normal configuration:
cache_mem 500 MB
memory_pools off
cache_dir aufs /mnt/cache-a 50 128 512
cache_dir aufs /mnt/cache-b 50 128 512
>> Since our tests for that change were successful
Did you restart while your test was succeeding? I guess not.
Alex, I tested on 2 environments, Debian 9 and Solaris.
Same shame: just keep it running and caching for 30 minutes, do a squid restart, and
check your swap.state; you will notice it's almost empty, on 2
totally different environments. OK?
I removed that patch and all is fine: swap.state stays perfect, and all is fine.
Using the squid-5.0.0-20180218-r3b65960 release kills my cache_dir somehow:
swap.state is empty while the cache dir holds 30 GB of cached items.
If I run for a couple of hours and swap.state grows to a couple of MB, then
restart Squid,
swap.state becomes empty, but the cached items in storage stay.
I removed the patch.
Is there any future fix for the write thread?
#define ASYNC_OPEN 1
#define ASYNC_CLOSE 0
#define ASYNC_CREATE 1
#define ASYNC_WRITE 0
#define ASYNC_READ 1
If I enable #define ASYNC_WRITE 1 and #define ASYNC_CLOSE 1,
I get:
WARNING: failed to unpack metadata because store entry metadata is
corrupted
exc
Using the latest Squid 5.
Amos Jeffries wrote
> "range_offset_limit N" only affects the initial starting point for a
> transaction. If a client wants to start reading a range somewhere in the
> first N bytes Squid will request the full file in order to cache it for
> future requests. Otherwise only the range the client w
Thanks. Another question related to this:
let's say a client starts downloading an object and reaches 50% at a speed of 128k.
When the client leaves or stops downloading, does the download continue at full
speed from that 50% up to the end, or does it restart from the beginning?
It's important to understand this mix:
if the client exits the download, Squid will force full speed on the object; it does not
stay at that rate.
acl limit_ext_bandwith url_regex \.esd \.exe
delay_pools 1
delay_class 1 1
delay_parameters 1 128000/128000
delay_access 1 allow limit_ext_bandwith
delay_access 1 deny all
range_offset_limit -1 limi
Amos Jeffries wrote
> On 23/11/17 01:25, joseph wrote:
>> after investigating, the bug existed but was hidden; all updates are correct
>> https://bugs.squid-cache.org/show_bug.cgi?id=3279
>> bug 3279 is not yet fixed in v5
>> hint: if they adapt the v3 patch it will solve the
After investigating: the bug existed but was hidden; all updates are correct.
https://bugs.squid-cache.org/show_bug.cgi?id=3279
Bug 3279 is not yet fixed in v5.
Hint: if they adapt the v3 patch, it will solve the problem :)
What about the missing file
src/ssl/stub_libsslutil.cc?
can't find file to patch at input line 951
Perhaps you used the wrong -p or --strip option?
The text leading up to this was:
--
|diff --git src/ssl/stub_libsslutil.cc src/ssl/stub_libsslutil.cc
|index 006be27..e0d3803 100644
|--- src/ssl/stub_libsslutil.cc
|+++ src/ssl/stub_
Amos Jeffries wrote
> On 04/10/17 22:45, joseph wrote:
>> amos i do use your patch and i get that error
>>
>
> Are you running bootstrap.sh after applying? It depends on changes to
> the autotools build system, so has no effect if you just apply it and
> re
Amos, I do use your patch, and I get that error.
Here is my build line; I use Debian 9.x with GCC 7 now:
libtool: compile: x86_64-linux-gnu-g++ -DHAVE_CONFIG_H -I../..
-I../../include -I../../lib -I../../src -I../../include
-I/usr/include/openssl -Wall -Wpointer-arith -Wwrite-strings -Wcomments
-
Just a note: I guess it's GCC 6.x and 7 being strict.
I guess I had already applied this patch you posted,
and the problem is a conversion, lol. Please re-check:
gadgets.cc: In function ‘const ASN1_BIT_STRING*
Ssl::X509_get_signature(const CertPointer&)’:
gadgets.cc:960:25: error: invalid conversion from ‘ASN1_BIT_STRING** {aka
asn1_string_st**}’ to
OpenSSL 1.1.0f 25 May 2017
Latest Debian and latest GCC in the repo:
squid-5.0.0-20170919-r478fb99.tar.gz
gcc version 6.3.0 20170516 (Debian 6.3.0-18)
Any idea, or does some code need to be converted to work with the latest GCC?
gadgets.cc: In function ‘const ASN1_BIT_STRING*
Ssl::X509_get_signature(const CertPointer&)’:
g
It's been a couple of days now.
When I use search, I get:
Gateway Timeout
The gateway did not receive a timely response from the upstream server or
application.
Apache/2.4.7 (Ubuntu) Server at bugs.squid-cache.org Port 80
--
View this message in context:
http://squid-web-proxy-cache.1019090.n4.nabble.com/http-bugs-
Well, this has worked for almost 10 years,
and you can do 2 marks if you want; make sure you use the same marking,
new-routing-mark=http,
on each range.
>> ROUTERWIFI( WANstatic ip 192.168.1.40/24 gw 192.168.1.20) LAN
192.168.0.1/24)
Is it MikroTik or something else? Please specify.
You might need this configuration:
/ip firewall address-list
add address=192.168.110.0/24 comment="one route port 80" list=http-route
add address=192.168.115.0/24 comment="two route port 80" list=http-route
/ip firewall mangle
add action=mark-routing chain=prerouting comment=\
"Clients HTTP ro
Thanks, Alex. I will investigate more: I will capture the port and link to identify
the causes, and the packets as well.
I don't remember with which patch I started getting those more and more;
I may have to start removing patches one by one and testing.
It's been one week since I started monitoring the access.log for this error; before that,
I had more than 1 daily; before, this error was 0, or at least not much.
Using Squid 5, with the latest patches up to r15228;
my setup is the same, nothing changed.
PS: not only those client IPs; the whole range is affected, so it's not a
specific client that has some trouble; all my clients produce this.
>> No. The cache file contains a TLV structure of metadata followed by the
Right, but so it should be TLV binary, and after that, HTTP/1.1 200 OK,
which is clear text? Or is there anything between them, like this:
accept-encodingHTTP/1.1 200 OK
with accept-encoding and the status line on one line as well.
1 ac
If you open the cached file in a hex editor,
the first header or so should be HTTP/1.1 200 OK,
right?
Is this correct on one line, and will it be a cacheable hit, or is it corrupted:
accept-encodingHTTP/1.1 200 OK ???
A good cached-header file example:
HTTP/1.1 200 OK
Via: cache-yes
Content-Type: image/x-icon
Last-Modified
Right.
Let's say I have an object with this header:
if the server sends the same object with a different Set-Cookie value, that will be a
MISS. I was referring to this,
not to Vary when it has a cookie; sorry if I did not explain it correctly.
Server: nginx
Date: Mon, 22 May 2017 15:44:59 GMT
Content-Type: image/gif
So if the server sends the same object with a new Cookie, it will be a MISS, since the
Cookie does not match the one in the cached object.
Regarding the topic, sorry if I ask; it will clear up all those questions.
All these questions have been asked before.
Example cached file header:
link =
http://sa.bbc.co.uk/bbc/bbc/s?nam
Is Set-Cookie: saved in the cached file as well, Amos?
Thanks, guys.
Well, another good idea:
clientside_tos ds-field [!]aclname ...
--
acl normal_service_net src 10.0.0.0/24
acl good_service_net src 10.0.1.0/24
clientside_tos 0x00 normal_service_net
clientside_tos 0x20 good_service_net
almo
It's a private file when matching the URL.
I need to control qos_flows tos local-hit=0x?? to a different bit than
Squid's, if it can be done.
file
I need to control the QoS bit sent on a HIT or MISS.
Thanks in advance.
Regards, joseph
You have to mark the packets in mangle with DSCP (TOS) = 12 first; did you?
Then in Squid, add qos_flows tos local-hit=0x30 miss=0xFF,
and in the queues, pick the marked packet name, so it will serve the cached HITs to
your clients.
If you cannot do it, I'll help you out.
Is it possible, in the future, to change eCAP to use concurrency like the
Store-ID helper,
so we can benefit from that?
Is this correct?
char *
skipLeadingSpace(char *aString)
{
char *result = aString;
while (xisspace(*aString))
++aString;
return result;
}
Alex Rousskov wrote
> On 04/13/2017 10:39 AM, Alex Rousskov wrote:
>
>> The "many folks misconfigure access rules" problem may not have a
>> good solution (under Squid control); we should be careful not to make
>> things worse while not solving the unsolvable problem.
>
>
> Here is an alternativ
> Why do you feel the need to purge items selectively from your cache?
I do not feel the need.
> If the objects are requested again, you've lost the opportunity to save
> some bandwidth.
I know what I'm doing; this is why I'm in control, lol.
Huge video data; my old provider had a Bluecoat behind my real band,
using the purge command.
Is there a way to just search for headers that contain specific info,
like CF-RAY, and purge all of them?
Thanks.
I cannot access it from Lebanon; any reason why?
Lol.
Most of these happen when Vary changed, lol.
When they make Vary work as it should, most of those messages will go away;
not 100%, but most of them.
I'm not going into detail, but it's the Vary header.
In v4 also:
http://www.squid-cache.org/Versions/v4/changesets/squid-4-14979.patch
Is this patch in the correct place? :)
http://www.squid-cache.org/Versions/v5/changesets/squid-5-15021.patch
revno: 15021
revision-id: squ...@treenet.co.nz-20170128033532-yuuv6hosxtsh040f
parent: chtsa...@users.sourceforge.net-20170126162230-
I'm not here to fight. Don't mention the RFC, because it is already violating the RFC
just by using enable-http-violations.
Please re-read my post, or get someone to translate its structure;
otherwise, there is no benefit in explaining or defending RFC stuff.
So please read my point of view carefully, or else we are wasting time. With one year of
exper
It depends on the country.
Bye,
joseph
So, with my structure, I have HTTP only, and as it is, Squid saves me only 5% of
all the HTTP bandwidth.
> An error occurred during a connection to http://e-vista.scsolutionsinc.com.
> SSL received a weak ephemeral Diffie-Hellman key in Server Key Exchange
> handshake message. Error code: SSL_ERROR_WEAK_SERVER_EPHEMERAL_DH_KEY
Brendan,
What tool did you use to reveal that? I checked the site's SSL c
> Care to add any detail to "can no longer connect"?
The squid server runs on centos 7.2, all corporate desktops all use IE 11, they
simply get a non-descriptive error in IE saying "This page can’t be displayed"
however chrome works for example but none of the desktops have access to
chrome.
The
Hi,
Recently our users can no longer connect to a vendor url
https://e-vista.scsolutionsinc.com/evista/jsp/delfour/eVistaStart.jsp behind
squid.
We have a few sites that don't work well when cached and adding this domain to
that acl has not helped. We are using version 3.3.8.
Any suggestion as t
reverse proxy mode with SSL bump enabled?
Thanks in advance,
Joseph
Hi,
I have configured a Squid reverse proxy with eCAP modules.
I have a client machine whose hosts file is edited so that
www.squid.com (the cached
domain in the reverse proxy) points to the Squid reverse proxy machine.
What I basically do is append the request URL to the cached
domain URL
hing faulty in my config?
Regards,
Joseph
On Wed, Aug 12, 2015 at 6:22 PM, Antony Stone <
antony.st...@squid.open.source.it> wrote:
> On Wednesday 12 August 2015 at 14:38:55, joseph jose wrote:
>
> > Hi,
> >
> > I have set up squid in reverse proxy mode to cach
in reverse mode, it is supposed to serve the static page from the
web server (10.0.0.2).
But when I open a browser and go to testsquid.com, Squid logs the
request but returns a TCP_DENIED/403 status.
Is anything additionally required in the squid config?
Thanks in advance,
Joseph