On 21/03/18 08:12, Michael Pro wrote:
> I totally agree with you, and at the same time I do not agree. But
> consider the following situation. There is https://site.net/ where
> there are 1.jpg and 2.jpg. What if I download 1.jpg from this site from
> address 1.1.1.1 and 2.jpg from address 2.2.2.2
On 03/20/2018 01:12 PM, Michael Pro wrote:
> Totally agree with you, and at the same time - do not agree.
AFAICT, I only stated facts, not opinions. There is nothing to agree or
disagree with in my response.
Alex.
___
squid-users mailing list
squid-user
I totally agree with you, and at the same time I do not agree. But
consider the following situation. There is https://site.net/ where
there are 1.jpg and 2.jpg. What if I download 1.jpg from this site from
address 1.1.1.1 and 2.jpg from address 2.2.2.2? Even more: there
are situations when you nee
I forgot to mention:
My server is relatively modest (it simply does not need more resources :))
Just 8 cores (Xeon 2.3 GHz), 16 GB RAM, 10k RPM SAS HDDs (~300 GB in
RAID-10) :)
Overall CPU usage is ~3% (with SSL Bump), and half of the RAM is free :)
20.03.2018 23:14, Yuri wrote:
>
> 20.03.2018 23:10, Yuri wrote:
20.03.2018 23:10, Yuri wrote:
>
> 20.03.2018 23:03, FredB wrote:
>> Hi Yuri,
>>
>> 200 mbit/s, more or less 1000/2000 simultaneous users
>>
>> I increased the children value, because the limit is reached very quickly
> Because SSL processing is too slow. Investigate why. Simply increasing
> the number of
20.03.2018 23:03, FredB wrote:
> Hi Yuri,
>
> 200 mbit/s, more or less 1000/2000 simultaneous users
>
> I increased the children value, because the limit is reached very quickly
Because SSL processing is too slow. Investigate why. Simply increasing
the number of children exhausts your RAM.
>
>> and on
Hi Yuri,
200 mbit/s, more or less 1000/2000 simultaneous users
I increased the children value, because the limit is reached very quickly
> and only 100 MB on disk?
100 MB per process, no? I think I should reduce this value and instead
increase the max number of children.
Maybe such a load is just impossible.
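For reference, the certificate-generator limits being discussed live in squid.conf; a minimal sketch for Squid 3.5 (the paths and numbers are illustrative, and note that -M caps the on-disk certificate database, which may or may not be the "100 MB" meant above):

    # Helper that mints fake certificates for SSL bump;
    # -M limits the size of the certificate database on disk
    sslcrtd_program /usr/lib/squid/ssl_crtd -s /var/lib/ssl_db -M 100MB
    # Number of helper processes; each one costs RAM, so raise with care
    sslcrtd_children 10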
20.03.2018 21:30, FredB wrote:
> Hi all,
>
> I'm testing SSLBump and Squid eats up all my CPU; maybe I did something
> wrong or maybe some updates are required? Any advice would be greatly
> appreciated.
>
> Debian 8.10 64-bit, Squid 3.5.27 + 64 GB RAM + SSD + 15-core Xeon(R) CPU
> E5-2637
On 03/20/2018 04:38 AM, Rafael Akchurin wrote:
> Is there any unique transaction ID in Squid's inner workings that I
> can see in the ICAP server, by passing it as an additional X-* ICAP header?
Unfortunately, no. %sn comes close to that, but due to implementation
bugs and backward compatibility
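For what it's worth, a %sn-based value can be attached to ICAP requests with the adaptation_meta directive; a sketch (the header name X-Squid-Transaction is invented, this assumes a Squid version that expands logformat codes in adaptation_meta values, and the caveats above about %sn still apply):

    # Send Squid's transaction sequence number to the ICAP service
    # as a custom header; %sn is the sequence-number logformat code
    adaptation_meta X-Squid-Transaction "%sn" all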
On 21/03/18 04:30, FredB wrote:
> Hi all,
>
> I'm testing SSLBump and Squid eats up all my CPU; maybe I did something
> wrong or maybe some updates are required? Any advice would be greatly
> appreciated.
Not sure about CPU consumption. AFAIK that is related to traffic loading
on the crypto l
On 03/20/2018 07:55 AM, Amos Jeffries wrote:
> 2) It is technically possible to make Squid open a CONNECT tunnel
> through an HTTP peer proxy to the origin instead of going there
> directly. The only thing preventing this is nobody writing the necessary
> code.
>
> It has been on my (and many othe
On 03/20/2018 05:11 AM, Michael Pro wrote:
> Question: how can we break the established channel (unpin it) along
> the old route and establish a new channel along the new route, when we
> already know how.
Squid supports using multiple sequential connections for the same
from-client request, but
Hi all,
I'm testing SSLBump and Squid eats up all my CPU; maybe I did something wrong
or maybe some updates are required? Any advice would be greatly appreciated.
Debian 8.10 64-bit, Squid 3.5.27 + 64 GB RAM + SSD + 15-core Xeon(R) CPU
E5-2637 v2 @ 3.50GHz
FYI, I don't see anything about li
2018-03-20 10:25 GMT-04:00, Amos Jeffries:
> On 19/03/18 23:03, Anoop Sreedharan wrote:
>> Dear Team,
>> We have an IT environment catering to an educational institute where
>> we have more than 1000 users accessing the internet.
>>
>> Having a volume-based internet subscription, we are in n
On 19/03/18 23:03, Anoop Sreedharan wrote:
> Dear Team,
> We have an IT environment catering to an educational institute where we
> have more than 1000 users accessing the internet.
>
> Having a volume-based internet subscription, we are in need of a
> solution wherein I need to restric
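Squid itself is better at rate limiting than at hard volume quotas, but a delay-pool setup is the usual starting point for capping how fast a subscription is consumed; a minimal squid.conf sketch (all numbers are illustrative):

    # One aggregate bucket shared by all clients:
    # refill at 256 KB/s, allow bursts up to 512 KB
    delay_pools 1
    delay_class 1 1
    delay_access 1 allow all
    delay_parameters 1 256000/512000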
On 21/03/18 00:11, Michael Pro wrote:
> squid-5 master branch, with no personal/private repository changes, not
> using cache_peer (and, if it matters, not using transparent proxying).
>
> We have a set of rules (ACLs with url_regex) for content, depending
> on which we make a d
On 19/03/18 19:13, Kiru Pananthan wrote:
> Hi Amos
>
> I have removed *. dashboard and also timetable which is not in use.
>
> I have added the accel after port number and removed vhost as per your
> advice. Can you check the file now? Am I good to go? I have not yet run
> the command "squid -k pa
On 20/03/18 03:40, Kiru Pananthan wrote:
> Hi Amos
>
> I have run the command of "squid -k parse" and attached output in the
> config file link
> Config file URL
> https://goo.gl/Q4a749
>
Do you see anything odd in that output?
Many of the syntax mistakes I have mentioned should also
On 20/03/18 23:38, Rafael Akchurin wrote:
> Greetings all,
>
> I am trying to find the best (easiest, least interfering) solution for
> the following problem.
>
> Our custom ICAP server writes various information about the ICAP
> transaction (user name, policy IP, detection module, timings, words tri
squid-5 master branch, with no personal/private repository changes, not
using cache_peer (and, if it matters, not using transparent proxying).
We have a set of rules (ACLs with url_regex) for content, depending
on which we make a decision about the outgoing address, for example,
fro
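A rule set like the one described is usually built from url_regex ACLs feeding tcp_outgoing_address; a minimal squid.conf sketch (the ACL names and patterns are invented, and the addresses echo the 1.1.1.1 / 2.2.2.2 example from this thread):

    # Classify requests by URL pattern; with SSL bump, only bumped
    # requests expose the full URL to url_regex
    acl jpg_content url_regex -i \.jpg$
    acl png_content url_regex -i \.png$

    # Pick the outgoing source address per content class
    tcp_outgoing_address 1.1.1.1 jpg_content
    tcp_outgoing_address 2.2.2.2 png_content

Note that once a connection is pinned, Squid cannot re-route it mid-stream, which is the limitation being discussed earlier in this thread.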
Greetings all,
I am trying to find the best (easiest, least interfering) solution for the
following problem.
Our custom ICAP server writes various information about the ICAP transaction
(user name, policy IP, detection module, timings, words that triggered
detection, etc.) into the record database. This