Hey,
Your words describe the BUG in its wildest and simplest form.
Please file a bug report so we can follow the progress.
Writing more and more here will not really help as it is.
Eliezer
On 15/03/2016 19:51, Heiler Bemerguy wrote:
Hi joe, Eliezer, Amos.. today I saw something different regarding high
bandwidth and caching of windows updates ranged requests..
A client begins a windows update, it does a:
HEAD to check size or something, which is ok..
then a ranged GET, which outputs a TCP_MISS/206,
then the next GET gives
Do you have refresh_all_ims on in your squid.conf?
Turn it off,
and try reload_into_ims off as well.
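For reference, a minimal squid.conf sketch of the two directives joe is
suggesting here (both default to off; this is an illustration of the advice,
not a confirmed fix for the issue in this thread):

# Do not force an origin revalidation for every client If-Modified-Since request
refresh_all_ims off
# Do not turn client "reload"/no-cache requests into If-Modified-Since revalidations
reload_into_ims off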
Thanks,
I'm with you on this, but it's not clear to many sys/cache admins that
caching windows updates covers only a "tiny" bit of the wider Internet.
Eliezer
On 14/03/2016 17:37, Heiler Bemerguy wrote:
My colleagues here asked me the same question, but I prefer to really FIX
the caching of big files/rock-stored files/ranged downloads instead of doing
something specific for windows updates.
To be honest, windows updates are just a simple example of ranged
downloads of big files making squid/rockstore g
Hey,
I have a question: in your scenario, if you were able to statically
cache all these updates using nginx, or another cache_peer, would that
sound OK? Or good enough?
Eliezer
On 14/03/2016 16:32, Heiler Bemerguy wrote:
Hi Eliezer and Joe!!!
Thank you very much for your support.
I have done a test here too. I've replaced 3.5.15 with 3.5.14 and the
high bandwidth (associated with SWAPFAIL) is GONE.
I've checked the source diffs between .14 and .15 twice and can't tell
what breaks this.. but I'm running 3.5.14
Regarding swapfail:
after I suffered a lot, even on the latest squid version,
what I found is that if you have, let's say, 32 GB of RAM and you specify
cache_mem 10 GB, or whatever size you have,
then once it reaches that limit, swap fails start happening, mostly on fast
small objects like js files or jpgs of no more than 100 KB max,
and same
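A rough squid.conf sketch of the sizing idea above; the 32 GB machine is
joe's example, while the 4 GB and 512 KB values are illustrative assumptions
rather than numbers tested in this thread:

# Keep cache_mem well below physical RAM so the index, the workers and
# the OS all have headroom (example: a 32 GB machine)
cache_mem 4 GB
# Keep memory-cached objects small (js/jpg sized); bigger objects go to disk
maximum_object_size_in_memory 512 KB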
OK, it's pretty simple to reproduce on any machine whatsoever on 3.5.15-2.
Open two terminals on two machines, more or less.
Then run the following command on one of them:
watch -n 0.2 "http_proxy=http://IP_OF_PROXY:3128/ curl --silent --range
20-40 http://ngtech.co.il/squid/videos/sosp2011_27.mp4 | wc -c"
Hey,
Thanks for the debug!
I do not know the exact reason, but I can say for sure that it's not the
NetAPP or any other OS-level issue, since an AUFS/UFS cache_dir works
fine on the same system in a similar situation.
I will try to replicate it locally.
I do understand the issue and I wil
I managed to track down one of these swapfails with GDB...
Breakpoint 1, clientReplyContext::cacheHit (this=0x33bff58, result=...)
at client_side_reply.cc:471
471 http->logType = LOG_TCP_SWAPFAIL_MISS;
(gdb) l
466 debugs(88, 3, "clientCacheHit: request aborted");
467
Hi Eliezer,
We usually don't restart it, ever. Only recently have I been restarting it
because of these issues. The shutdown_lifetime is set to 5 seconds only.
We are still getting SWAPFAIL_MISS without any apparent reason, and if
it is for a RANGE request, it multiplies into many parallel
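For reference, a minimal sketch of the directive under discussion; 5 seconds
is the value Heiler reports, while the squid default is 30 seconds:

# Grace period on shutdown/reconfigure: active client connections get
# this long to finish before being closed forcefully
shutdown_lifetime 5 seconds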
Hey,
I wanted to ask something very specific: how often do you restart the
service, if at all? What shutdown_lifetime
[http://www.squid-cache.org/Doc/config/shutdown_lifetime/] are you using?
Eliezer
On 10/03/2016 2:17 a.m., Heiler Bemerguy wrote:
>
> Hi Amos,
>
> Now you can help me on tracking it down.. lol... can you? I don't know
> what debug_options (apart from 88,3) I should enable.
88,9 to see what else is happening in and around that. I took a quick
look and saw that 88,5 has details a
Hi Amos,
Now you can help me on tracking it down.. lol... can you? I don't know
what debug_options (apart from 88,3) I should enable.
I just know that disabling range_offset will eliminate this issue,
because it won't even try to cache range requests. Also, it didn't
happen when I was using AUFS
On 9/03/2016 4:26 a.m., Heiler Bemerguy wrote:
>
> This way it won't cache any "range" downloads, as "range_offset_limit 0"
> is the default option and it will make squid only download what the
> client requested.
>
> From squid-cache.org: "A size of 0 causes Squid to never fetch more than
> the client requested."
On 09/03/2016 10:54, L.P.H. van Belle wrote:
No,
Aufs:
cache_dir aufs /var/spool/squid 9216 16 256 max-size=100663296
Then the cases are different by nature...
You have 9 GB and he uses 90+++ GB; you are using AUFS, which is
FS-based, and he is using ROCK, which is a DB structure.
The issues
On 09/03/2016 09:59, L.P.H. van Belle wrote:
With the settings I already told you. Today is MS update day and hey..
it's caching my windows updates.. so go try them out.
Are you using ROCK cache_dir ??
Eliezer
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On behalf of
Heiler Bemerguy
Sent: Tuesday, 8 March 2016 23:39
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Sudden but sustained high bandwidth usage
While debugging, found this:
2016/03/08 18:22:49.212 kid2| 88,3| client_side_reply.cc(463) cacheHit:
clientCacheHit:
http://au.v4.download.windowsupdate.com/c/msdownload/update/software/uprl/2016/03/windows-kb890830-x64-v5.34_e0074d1fa34d00f8b35e6d5c7be86729c263.exe,
0 bytes
2016/03/08 18
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On behalf of
> Alex Rousskov
> Sent: Tuesday, 8 March 2016 6:05
> To: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Sudden but sustained high bandwidth usage
On 03/07/2016 07:00 PM, Amos Jeffries wrote:
> It's a minor bug in the report display that they don't have more
> columns with separate numbers for each worker.
IMHO, the *summary* page should not include such noise. There should be
a way to request worker-specific stats instead.
Even for stat
Thanks for the interpretation.
I didn't find any bug report related to the subject.
I will try to add it to bugzilla later.
Eliezer
On 8/03/2016 10:00 a.m., Eliezer Croitoru wrote:
>
> I do not know exactly what this means from the info page:
> Maximum number of file descriptors: 81920
80K FD are available to Squid.
The rest gets strange..
> Largest file desc currently in use: 6157
> Number of fi
On 08/03/2016 00:08, Heiler Bemerguy wrote:
I don't know how to explain these FD numbers. I'm using EXT4 and I don't
know what vmware cache disks are.
Since it's a VM, there are a couple of options for a DATASTORE in vmware ESXi.
A description of the different options is at:
https://www.vmware.com
On 03/07/2016 02:00 PM, Eliezer Croitoru wrote:
> I do not know exactly what this means from the info page:
> Maximum number of file descriptors: 81920
> Largest file desc currently in use: 6157
> Number of file desc currently in use: 8216
>
> If the number of FD curren
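As an aside, the 81920 figure from the info page corresponds to a single
squid.conf directive; a minimal sketch, assuming the limit was raised in
squid.conf rather than only via ulimit (the thread does not say which):

# File-descriptor ceiling for Squid; the OS hard limit must also allow it
max_filedescriptors 81920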
I know what is happening. I just don't know how to fix it without
breaking windows updates caching.
The "extra" traffic is coming from windows updates mirrors.
acl wupdatecachable url_regex -i
(microsoft|windowsupdate)\.com.*\.(cab|exe|ms[i|u|f]|dat|zip|psf|appx|appxbundle|esd)
range_o
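The truncated directive above is presumably the matching range_offset_limit
rule. A hedged reconstruction from pieces quoted elsewhere in the thread
("range_offset_limit none" appears in the 3/03 message below; pairing it with
this ACL is an inference, not confirmed config):

# Ignore the client's Range header for matching URLs and fetch the whole
# object, so ranged windows-update downloads become cacheable
range_offset_limit none wupdatecachable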
On 07/03/2016 22:08, Yuri Voinov wrote:
90 Gb first, 300 Gb second.
Thanks, but...
Wouldn't it be much simpler and cheaper to just use WSUS instead of all
the hassle?? (if it's a closed business environment)
And when does the TCP_SWAPFAIL_MISS happen? Always?
And a little tweak for the squid.co
BTW, _all_ Windows updates are much more than ~400 GB, AFAIK :)
90 GB first, 300 GB second.
08.03.16 1:07, Eliezer Croitoru wrote:
Sorry about the confusion/misunderstanding.. my brain's cache is kind of
tiny/short, and I am not sure, but was it you who asked a question about
the big NETAPP cache not long ago? Or was it someone else? I may be
confusing it because the other one had more clients but a similar issue.
I will lat
skyrocketing = using our maximum link download bandwidth.
This machine is only proxying. It is not a firewall, not a router, nor a
gateway. It has access to the internet through our gateway/firewall
(pfsense).
Lots of LAN clients are connected to the proxy; this is their only way
to the interne
On 03/07/2016 10:46 AM, Amos Jeffries wrote:
> There are still issues around the mime headers portion needing to be
> fully within the first slot,
Just to avoid misunderstanding: There is a bug regarding store entry
meta information needing to fit into one slot. HTTP response headers may
(and oft
On 8/03/2016 4:01 a.m., Heiler Bemerguy wrote:
>
> Hi Yuri,
>
> Only rock-store.. as they told me there's no file limit any more...
>
Not being limited does not mean it is a good idea to go huge. It is
still a database made up of 32KB slots. They are just able to be chained
in sequence now to s
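To make that concrete, a sketch (not configuration posted in this thread): a
rock cache_dir is a slot database, and a large object is stored as a chain of
fixed-size slots, so a huge max-size simply means a very long chain:

# One rock database of fixed-size cells; slot-size may not exceed 32 KB.
# A 10 GB object then occupies a chain of ~327,680 slots.
cache_dir rock /cache/rock1 300000 slot-size=32768 min-size=32769 max-size=10737418240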
On 07/03/2016 16:29, Heiler Bemerguy wrote:
We're still getting all these SWAPFAIL and our link is
skyrocketing.. please help! I think it didn't happen on older
versions (.14 and below)
Hey,
What do you mean by skyrocketing?? Like in the graph??
Also, something is not clear to me about t
07.03.16 21:54, Heiler Bemerguy wrote:
>
> Hi Yuri,
>
> I see this recommended everywhere, as the cache_dirs have different
min-sizes... so it will try one, then another till one fits... is it wrong?
It's stupid. See below.
> The default method is b
Hi Yuri,
I see this recommended everywhere, as the cache_dirs have different
min-sizes... so it will try one, then another till one fits... is it wrong?
The default method is by least-load.. but the load doesn't matter in
this case.. what matters is the min-size/max-size, isn't it?
Best Regards
store_dir_select_algorithm round-robin ??? With two dirs only??
07.03.16 21:01, Heiler Bemerguy wrote:
> store_dir_select_algorithm round-robin
Hi Yuri,
Only rock-store.. as they told me there's no file limit any more...
maximum_object_size 10 GB
store_dir_select_algorithm round-robin
cache_dir rock /cache2/rock1 90000 min-size=0 max-size=32768
cache_dir rock /cache/rock1 300000 min-size=32769 max-size=10737418240
Best Regards,
Are you using aufs?
07.03.16 20:29, Heiler Bemerguy wrote:
Hi guys
We're still getting all these SWAPFAIL and our link is
skyrocketing.. please help! I think it didn't happen on older
versions (.14 and below)
1457358929.643953 10.23.0.63 TCP_SWAPFAIL_MISS/206 1450553 GET
http://au.download.windowsupdate.com/c/msdownload/update/software/upr
On 5/03/2016 10:54 a.m., Heiler Bemerguy wrote:
>
> Hi Amos,
>
> It seems the "quick_abort_min -1 KB" did the trick. But I remember that
> "range_offset_limit" should overrule that.. isn't it?
Yes, it is supposed to. It seems the docs are incorrect.
> Also, I saw people using -1 instead of "non
Hi Amos,
It seems the "quick_abort_min -1 KB" did the trick. But I remember that
"range_offset_limit" should overrule that.. shouldn't it?
Also, I saw people using -1 instead of "none" for range_offset_limit..
is it the same? :P
quick_abort_min -1 KB
acl wupdatecachable url_regex -i
(micro
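On the "-1 vs none" question: "none" tells Squid to always fetch the object
from the beginning regardless of the client's Range header, and the older -1
spelling is commonly used to mean the same thing (the answer in the quoted
reply above is truncated, so treat the equivalence as an assumption). A
minimal sketch of the combination being discussed:

# Always fetch the full object for matching URLs ("none" = no limit)
range_offset_limit none wupdatecachable
# Never abort a partially-fetched object, so the download can complete
quick_abort_min -1 KB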
On 4/03/2016 4:49 a.m., Heiler Bemerguy wrote:
Hi Amos,
Didn't you notice it was always the same client? The same IP address
re-downloading ad aeternum..
I managed to fix it by not caching stuff with "?" in it:
*refresh_pattern -i (/cgi-bin/|\?) 0 0% 0*
But I don't know if it's the best approach..
The URL was like this:
10.101.1.50 TCP
On 3/03/2016 10:33 a.m., Heiler Bemerguy wrote:
>
> Hello guys..
>
> Thanks for the tips. I've adjusted some stuff here and noticed these
> repeated GETs below.. they are HITs, but why is this happening?
Because lots of clients want the object(s).
If they are HITs then what's the problem? Squid i
Hello guys..
Thanks for the tips. I've adjusted some stuff here and noticed these
repeated GETs below.. they are HITs, but why is this happening? lol
I have "*range_offset_limit none*" for this domain (*ws.microsoft.com*) and:
refresh_pattern -i
(microsoft|windowsupdate)\.com.*\.(cab|exe|ms
On 2/03/2016 10:57 a.m., Heiler Bemerguy wrote:
Hey guys.
For the third time, we got a sudden high bandwidth usage, almost
saturating our link, and it won't stop until squid is restarted.
I'm totally SURE this inbound traffic comes from squid. It's like it's
downloading stuff itself.
Look how, after squid was restarted near 10:45, the n