>
> Hi Fred,
> you cannot expect a higher % saving without using a Store-ID tool.
> A 20% bandwidth saving is already good with a plain Squid...
>
> bye Fred
>
>
Yes, but that is enough for me; saving bandwidth is just one part of my usage ...
It was just an interesting test, comparing 6 proxies with the same, high
Hi Fred,
you cannot expect a higher % saving without using a Store-ID tool.
A 20% bandwidth saving is already good with a plain Squid...
bye Fred
--
View this message in context:
http://squid-web-proxy-cache.1019090.n4.nabble.com/refresh-pattern-and-same-objects-tp4672792p4673368.html
Sent from the Squid - Users mailing list archive at Nabble.com.
Hi all,
Just for information: mixed results were obtained.
The hit ratio increases from 30% to 40%, but the bandwidth saved stays about
the same, +- 20%. And the load average and CPU usage are a little higher
(the regexes for refresh_pattern, I suppose)
Fred
This is only an example. Obviously we need to investigate every case
separately and write or correct rules as needed.
It is a big mistake to assume that there is a magic set of rules
suitable for all occasions, which allows achieving a high hit
On 3/09/2015 3:04 a.m., Yuri Voinov wrote:
>
> Here is another case with the same image:
>
> http://i.imgur.com/qM52aPQ.png
>
> The same, right?
>
> So, it is proposed that I keep thousands of copies of the same image, even
> within a single user session, just because someone is once again afraid
> to
On 3/09/2015 2:58 a.m., Yuri Voinov wrote:
>
> Here is an example.
>
> Look at these three screenshots.
>
> First. Two images requested by one client at the same time.
>
> http://i.imgur.com/JbMhTQ4.png
>
> This is the same image:
> http://i.imgur.com/4khcCOT.png
> http://i.imgur.com/Ya58kfG.png
On 3/09/2015 12:23 a.m., Yuri Voinov wrote:
>
> Look at this:
>
> http://i.imgur.com/gbkU20r.png
>
> Pay attention to the reply times. With a hit ratio not above 30%,
> unacceptable delays for clients will also occur.
>
> So, I see no reason to have a cache with a low hit ratio in any case. IMHO
> n
Here is another case with the same image:
http://i.imgur.com/qM52aPQ.png
The same, right?
So, it is proposed that I keep thousands of copies of the same image, even
within a single user session, just because someone is once again afraid
to cache? And I
Here is an example.
Look at these three screenshots.
First. Two images requested by one client at the same time.
http://i.imgur.com/JbMhTQ4.png
This is the same image:
http://i.imgur.com/4khcCOT.png
http://i.imgur.com/Ya58kfG.png
Agree?
And -
Look at this:
http://i.imgur.com/gbkU20r.png
Pay attention to the reply times. With a hit ratio not above 30%,
unacceptable delays for clients will also occur.
So, I see no reason to have a cache with a low hit ratio in any case. IMHO
we need to tune ca
30% is too low a hit ratio to justify a caching proxy in the infrastructure.
There is simply no reason to cache anything with a low hit ratio; it's
enough to buy more external throughput. Agree?
Yes, I use the 3.4.x version with custom settings. It seems safe enough for
On 02/09/2015 13:00, Yuri Voinov wrote:
I'm getting a very high hit ratio in my cache, and I do not intend to
lower it myself. It is enough that on the opposite side thousands of
webmasters counteract the caching of their content on their own grounds,
beginning with YouTube.
Well, most sane s
I'm getting a very high hit ratio in my cache, and I do not intend to
lower it myself. It is enough that on the opposite side thousands of
webmasters counteract the caching of their content on their own grounds,
beginning with YouTube.
02.09.15 1
Not to use the ignore-must-revalidate refresh_pattern option for that content.
So far, my approach has not caused a single problem with customers. And,
in my opinion, you are too cautious, fearing to cache more aggressively. If I
got complaints about problems with a site -
On 02/09/2015 12:46, Yuri Voinov wrote:
all, but I assume that you do not want innocent victims, like the few
GIFs that actually serve a different image depending on the parameter.
Maybe, maybe not. Most often I deal with unscrupulous webmasters who
deliberately make the same unfriendly content ca
02.09.15 4:57, Marcus Kool wrote:
>
>
> On 09/01/2015 03:57 PM, Yuri Voinov wrote:
>>
> This is a bad idea - caching the same GIFs with unique parameters. They
stay unchanged for one HTTP session at best. Your cache will
overload with these s
On 09/01/2015 03:57 PM, Yuri Voinov wrote:
This is a bad idea - caching the same GIFs with unique parameters. They stay
unchanged for one HTTP session at best. Your cache will overload with these
small identical GIFs with unique parameters.
Onl
This is a bad idea - caching the same GIFs with unique parameters. They
stay unchanged for one HTTP session at best. Your cache will overload
with these small identical GIFs with unique parameters. Only Store-ID
saves this situation. On the other hand,
02.09.15 0:16, Marcus Kool wrote:
>
>
> On 09/01/2015 03:08 PM, Yuri Voinov wrote:
>>
> Better to write a Store-ID rule which cuts off the parameters and stores the gif.
>
> Something like this:
>
>
^https?:\/\/(.+?)\/(.+?)\.(js|css|jp(?:e?g|e|2)|gif|png|bmp|ico|svg|web(p|m))
On 09/01/2015 03:08 PM, Yuri Voinov wrote:
Better to write a Store-ID rule which cuts off the parameters and stores the gif.
Something like this:
^https?:\/\/(.+?)\/(.+?)\.(js|css|jp(?:e?g|e|2)|gif|png|bmp|ico|svg|web(p|m))
http://$1.squidinternal/$2.$3
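The rule above maps every query-string variant of a static object to one canonical store key. A small Python sketch of the same regex logic (illustration only; a real Squid Store-ID helper speaks a line-based protocol on stdin/stdout, which is omitted here, and the `.squidinternal` suffix is just the convention from the rule above):

```python
import re

# Pattern from the post: host, path, and a known static-file extension.
PATTERN = re.compile(
    r'^https?:\/\/(.+?)\/(.+?)\.(js|css|jp(?:e?g|e|2)|gif|png|bmp|ico|svg|web(p|m))'
)

def store_id(url):
    """Return the canonical cache key for a URL, dropping query parameters."""
    m = PATTERN.match(url)
    if m:
        host, path, ext = m.group(1), m.group(2), m.group(3)
        # Mirror of the rewrite target: http://$1.squidinternal/$2.$3
        return 'http://%s.squidinternal/%s.%s' % (host, path, ext)
    return url  # no match: keep the original URL as the store key

print(store_id('http://example.com/img/logo.gif?session=42'))
# http://example.com.squidinternal/img/logo.gif
```

With this mapping, every `?session=...` variant of `logo.gif` shares one cache entry, while any URL not matching the extension list is left untouched.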
And, finally, trackers are relatively easy to block ;) Simple. That avoids
caching them and filling the cache storage with garbage. With ufdbGuard, for example :)
02.09.15 0:00, Marcus Kool wrote:
>
>
> On 09/01/2015 05:14 AM, FredB wrote:
>> More precisely
>>
>> I reduce
Better to write a Store-ID rule which cuts off the parameters and stores the gif.
Something like this:
^https?:\/\/(.+?)\/(.+?)\.(js|css|jp(?:e?g|e|2)|gif|png|bmp|ico|svg|web(p|m))
http://$1.squidinternal/$2.$3
And, of course, a universal rule for sto
On 09/01/2015 05:14 AM, FredB wrote:
More precisely:
I reduced the TTL of the first line.
refresh_pattern -i \.(htm|html|xml|css)(\?.*)?$ 10080 100% 10080
#All File 30 days max
refresh_pattern -i
\.(3gp|7z|ace|asx|bin|deb|divx|dvr-ms|ram|rpm|exe|inc|cab|qt)(\?.*)?$ 43200
100% 43200 ignore-no-
01.09.15 18:40, FredB wrote:
>
>
>> Hi Fred,
>> By keeping objects 30 days max, does that mean you expect to update all
>> windowsupdate objects within 30 days?
>>
>> I'm still thinking we should have an option forcing some types of
>> objects
>>
windowsupdate is HTTP, no SSL here...
Bye Fred
--
View this message in context:
http://squid-web-proxy-cache.1019090.n4.nabble.com/refresh-pattern-and-same-objects-tp4672792p4673014.html
Sent from the Squid - Users mailing list archive at Nabble.com.
> Hi Fred,
> By keeping objects 30 days max, does that mean you expect to update all
> windowsupdate objects within 30 days?
>
> I'm still thinking we should have an option forcing some types of objects
> that could never be deleted... ;o)
>
> Bye Fred
>
>
Hi
Yes perhaps, actually it's just
On 1/09/2015 9:32 p.m., FredB wrote:
>
>>>
>>> refresh_pattern -i \.(htm|html|xml|css)(\?.*)?$ 43200 1000% 43200
>>> -> This is my previous rule "http"
>>
>> Yes.
>>
>> Oh, and there is the less common .chm could be in that set too.
>>
>
>
> Ok added
>
> A last point there is a real difference
Hi Fred,
By keeping objects 30 days max, does that mean you expect to update all
windowsupdate objects within 30 days?
I'm still thinking we should have an option forcing some types of objects
that could never be deleted... ;o)
Bye Fred
> >
> > refresh_pattern -i \.(htm|html|xml|css)(\?.*)?$ 43200 1000% 43200
> > -> This is my previous rule "http"
>
> Yes.
>
> Oh, and there is the less common .chm could be in that set too.
>
OK, added.
One last point: there is a real difference between (\?.*)?$ and (?.*)?$ Here:
http://www.squ
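The difference is easy to demonstrate with Python's re module (an illustration; Squid itself uses POSIX regular expressions, where the details differ, but in both flavors only \? matches a literal question mark):

```python
import re

# Escaped: an optional group of a literal '?' plus the rest, anchored at end.
escaped = re.compile(r'(\?.*)?$')
assert escaped.search('style.css?v=1').group(1) == '?v=1'   # query captured
assert escaped.search('style.css').group(1) is None         # group is optional

# Unescaped: '(?' introduces a regex extension, and '(?.' is not a valid one.
try:
    re.compile(r'(?.*)?$')
except re.error as err:
    print('invalid pattern:', err)
```

So the backslash is not decoration: without it the pattern does not mean "optional query string" at all.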
On 1/09/2015 7:55 p.m., FredB wrote:
>
>>
>> Trying to avoid override-no-store as long as possible, and target it
>> to
>> problem sites when it is used.
>>
>> And after placing this at the end of the patterns:
>>
>> (\?.*)?$
>>
>>
>
>
> Something like this ?
>
> refresh_pattern -i \.(htm|htm
More precisely:
I reduced the TTL of the first line.
refresh_pattern -i \.(htm|html|xml|css)(\?.*)?$ 10080 100% 10080
#All File 30 days max
refresh_pattern -i
\.(3gp|7z|ace|asx|bin|deb|divx|dvr-ms|ram|rpm|exe|inc|cab|qt)(\?.*)?$ 43200
100% 43200 ignore-no-store reload-into-ims store-stale
refre
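For background, the freshness decision these min/percent/max numbers drive can be sketched as follows (a simplification of the documented refresh_pattern algorithm; real Squid first honors Expires/Cache-Control and the various override options):

```python
def is_fresh(age_min, lm_age_min, min_min, percent, max_min):
    """Simplified refresh_pattern decision for a response without explicit expiry.

    age_min    -- minutes since the object entered the cache
    lm_age_min -- minutes between the object's Last-Modified and its arrival
    """
    if age_min > max_min:
        return False                     # older than MAX: always stale
    if lm_age_min > 0 and 100.0 * age_min / lm_age_min < percent:
        return True                      # fresh by last-modified factor
    return age_min < min_min             # younger than MIN: fresh

# With the rule above: refresh_pattern ... 10080 100% 10080
print(is_fresh(5000, 0, 10080, 100, 10080))    # True: under a week old
print(is_fresh(20000, 0, 10080, 100, 10080))   # False: past the one-week MAX
```

Setting min equal to max, as in Fred's rule, pins the freshness window to a fixed duration regardless of the object's Last-Modified age.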
> The cases I have personally seen where you might run into serious trouble
> are .tiff files. TIFF is a "high quality" format. At least it's very
> high in detail, and I've seen it used with only no-store protection to
> send medical, mapping and hi-res photographic data around by softwar
On 1/09/2015 4:01 a.m., FredB wrote:
>
>>
>> I'm thinking about something like this
>>
>>
>
>
> Sorry wrong move :)
>
> So, what I meant was
>
> I'm thinking about something like this
>
> # HTTP 1/1
> # The refresh_pattern rules applied only to responses without an explicit
> expiration tim
>
> I'm thinking about something like this
>
>
Sorry, wrong move :)
So, what I meant was:
I'm thinking about something like this
# HTTP 1/1
# The refresh_pattern rules apply only to responses without an explicit
expiration time
# min 1440 minutes
# Max 10080 minutes
# http 10080 / 60 /
I'm thinking about something like this
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
On 22/08/2015 5:06 a.m., FredB wrote:
> Thanks Amos, very interesting as usual
>
> So, my vision was old school (HTTP 1.0); I should read the recent
> documentation to find something optimal for my caches without side
> effects. In the past (Squid 2.x, I guess) I saw some objects changed on
> websit
Thanks Amos, very interesting as usual
So, my vision was old school (HTTP 1.0); I should read the recent
documentation to find something optimal for my caches without side effects. In
the past (Squid 2.x, I guess) I saw some objects changed on a website that were
never delivered by Squid (always th
On 21/08/2015 11:39 p.m., FredB wrote:
> Hi all,
>
> I think I misunderstand something, but why is refresh_pattern not useless?
> I mean, the objects are supposed to be delivered with instructions from the
> web server: lifetime, creation time, etc.
>
Well, we like it when they do. Since that ma
Hi all,
I think I misunderstand something, but why is refresh_pattern not useless?
I mean, the objects are supposed to be delivered with instructions from the web
server: lifetime, creation time, etc.
I thought (and it seems I'm wrong?) that Squid checks the HTTP headers when the
object seems expi
Amos,
With this type of config we'll keep all stale but popular objects in the cache.
I think we need special options:
save_big_file on/off
save_big_file_min_size 128 MB
save_big_file_max_time 1 year
It would be clearer and more precise; can we count on these options soon?
Bye Fred
Amos,
We do use "cache_replacement_policy heap LFUDA", so it should do the job as
you explain, right?
If I understand you correctly, we should also use something like
"max_stale 1 year", correct?
Thanks in advance.
Bye Fred
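Pulling these suggestions together, a minimal squid.conf fragment might look like this (a sketch under the assumptions discussed above; the max_stale value is illustrative and the refresh_pattern line is Fred's own rule, not a tested tuning):

```
# Evict by popularity, not just age, so big frequently-hit objects survive
cache_replacement_policy heap LFUDA

# Upper limit on how stale a cached object may be served if revalidation fails
max_stale 1 year

# Keep large archive-type objects for up to 30 days, as in Fred's rule
refresh_pattern -i \.(3gp|7z|ace|asx|bin|deb|divx|dvr-ms|ram|rpm|exe|inc|cab|qt)(\?.*)?$ 43200 100% 43200 ignore-no-store reload-into-ims store-stale
```

LFUDA (Least Frequently Used with Dynamic Aging) is what makes "keep it if it is often used" work: replacement pressure falls on rarely-hit objects first, regardless of size.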
On 21/08/2015 8:36 p.m., Stakres wrote:
> Hi Amos,
> Is it possible to have a dedicated option in Squid to keep objects
> in the cache if they are regularly used, even after they have expired?
> Cleaning small expired files (<16 KB) is not a problem, but we must keep big
> files in the cache i
Hi Amos,
Is it possible to have a dedicated option in Squid to keep objects
in the cache if they are regularly used, even after they have expired?
Cleaning small expired files (<16 KB) is not a problem, but we must keep big
files in the cache if they are often used.
There are many "small" ISPs with 2,
On 21/08/2015 2:38 a.m., Stakres wrote:
> Hi All,
>
> Maybe someone gets the info already...
> A refresh_pattern with a 1-week max: if the same object is "visited" (served
> from the Squid cache) every day, will the object be deleted 1 week after the
> first cache action, or will Squid add +1 we
Hi All,
Maybe someone gets the info already...
A refresh_pattern with a 1-week max: if the same object is "visited" (served
from the Squid cache) every day, will the object be deleted 1 week after the
first cache action, or will Squid add +1 week each time the object is
served from the cache?
My