On 3 Oct 2015, at 19:09, Dhalgren Tor wrote:
Was going to wait a few days before reporting back, but early results
are decisive.
The overload situation continued to worsen over a two-day period, with
consensus weight continuing to rise despite the relay often running in
a state of extreme overload and performing its exit function quite
terribly.
On 10/2/15, jensm1 wrote:
> You're saying that you're on a 1Gbit/s link, but you are only allowed to
> use 100Mbit/s. Is this averaged over some timescale?
More than 100 Mbit/s: 100 Mbit/s would be only about 60 TB/month total
for both directions. The plan is 100 TB/month, a common usage tier. It
has a FUP (fair usage policy) attached…
You're saying that you're on a 1Gbit/s link, but you are only allowed to
use 100Mbit/s. Is this averaged over some timescale? If so, you could
try and play around with the 'RelayBandwidthBurst' setting. Increasing
the Burst might help reduce the queue delay when you're near saturation,
assuming the…
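A minimal torrc sketch of that suggestion (the numbers are illustrative
assumptions, not figures from this thread):

# sustained cap on relayed traffic
RelayBandwidthRate 16500 KBytes
# allow short bursts above the sustained rate to absorb peaks
RelayBandwidthBurst 33000 KBytes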
"So" indeed. For the time that was under discussion:
cell-stats-end 2015-10-02 00:28:54 (86400 s)
cell-processed-cells 20220,420,72,18,8,4,1,1,1,1
cell-queued-cells 2.00,0.25,0.01,0.00,0.09,0.10,0.02,0.00,0.00,0.00
cell-time-in-queue 203,131,17,7,2832,6198,3014,802,21,26
cell-circuits-per-decile…
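For anyone reading the two samples side by side: per the extra-info
spec these values are reported per circuit decile (busiest circuits
first), and cell-time-in-queue is the mean time cells spend queued, in
milliseconds. The earlier sample topped out around 107 ms; here several
deciles sit at roughly 2.8-6.2 seconds of queuing.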
On Thu, Oct 1, 2015 at 11:41 PM, Tim Wilson-Brown - teor
wrote:
>
> We could modify the *Bandwidth* options to take TCP overhead into account.
Not practical. TCP/IP overhead varies greatly. I have a guard that
averages 5% while the exit does 10% when saturated, and more
when running in good balance…
On 2 Oct 2015, at 01:19, Dhalgren Tor wrote:
On Thu, Oct 1, 2015 at 10:17 PM, Yawning Angel wrote:
>
> Using iptables to drop packets is also going to add queuing delays
> since cwnd will get decreased in response to the loss (CUBIC uses beta
> of 0.2 IIRC).
Unfortunately true. Arriving at a better result empirically is the idea.
When saturated…
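Putting a rough number on that effect, taking the beta of 0.2 that
Yawning recalls (so each loss multiplies the congestion window by 0.8):

  cwnd_after = (1 - 0.2) x cwnd_before

A random drop rate of even a fraction of a percent therefore keeps
knocking the window, and hence throughput, below what the path would
otherwise sustain.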
On Thu, 1 Oct 2015 19:05:38 +0000,
Dhalgren Tor wrote:
> 3) observing that statistics show elevated cell-queuing delays when
> the relay has been in the saturated state, e.g.
>
> cell-queued-cells 2.59,0.11,0.01,0.00,0.00,0.00,0.00,0.00,0.00,0.00
> cell-time-in-queue 107,25,3,3,4,3,7,4,1,7
So?
On Thu, Oct 1, 2015 at 7:45 PM, Steve Snyder wrote:
>
> Another consumer of bandwidth is name resolution, if this is an exit node.
> And the traffic incurred by the resolutions is not reflected in the relay
> statistics.
>
> An exit node that allocates 100% of its bandwidth to relaying traffic…
On Thursday, October 1, 2015 3:05pm, "Dhalgren Tor"
said:
[snip]
>
> You are overlooking TCP/IP protocol bytes which add between 5 and 13%
> to the data and are considered billable traffic by providers. At 18M
> it's solidly over 100TB, at 16.5M it will consume 97TB in 31 days.
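The arithmetic behind those quoted figures, spelled out (31-day month,
both directions billable, taking ~10% as the overhead when saturated):

  16.5 MB/s x 2,678,400 s x 2 directions = 88.4 TB payload
  88.4 TB x 1.10 = ~97 TB billable

  18 MB/s x 2,678,400 s x 2 directions = 96.4 TB payload
  96.4 TB x 1.10 = ~106 TB billable, solidly over the 100 TB plan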
On Thu, Oct 1, 2015 at 5:12 PM, Moritz Bartl wrote:
On 10/01/2015 06:28 PM, Dhalgren Tor wrote:
> This relay appears to have the same problem:
> sofia
> https://atlas.torproject.org/#details/7BB160A8F54BD74F3DA5F2CE701E8772B841859D
This is one of ours, and it works just fine, the way it's supposed to.
Your 18000000 is quite near the 16.5 MByte/s…
This relay appears to have the same problem:
sofia
https://atlas.torproject.org/#details/7BB160A8F54BD74F3DA5F2CE701E8772B841859D
>Maybe use this:
>
>MaxAdvertisedBandwidth
This setting causes the relay to limit the self-measured value
published in the descriptor. It has no effect on the measurement
system. It would be helpful if it did.
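For reference, the option takes the usual bandwidth units; a one-line
sketch (the value is an illustrative assumption, not from this thread):

# cap the bandwidth the relay claims in its own descriptor
MaxAdvertisedBandwidth 6500 KBytes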
On Thu, Oct 1, 2015 at 1:33 PM, Tim Wilson-Brown - teor
wrote:
>
>> On 1 Oct 2015, at 15:22, Dhalgren Tor wrote:
>>
>> If the relay stays overloaded I'll try a packet-dropping IPTABLES rule
>> to "dirty-up" the connection.
>
> Please reduce your BandwidthRate until your relay load is what you want it
> to be.
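For concreteness, the kind of rule being floated might look like this
(a hypothetical sketch: no actual rule appears in the thread, the
ORPort and drop probability here are invented, and as Yawning Angel
notes elsewhere in the thread, the induced loss shrinks cwnd and adds
its own queuing delay):

# randomly drop ~0.5% of outbound packets leaving the relay's ORPort
iptables -A OUTPUT -p tcp --sport 9001 -m statistic --mode random --probability 0.005 -j DROP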
You only mentioned the 100TB plan limit; this is why I suggested
AccountingMax. I couldn't have guessed you were talking about some
other policy limits.
The consensus weight is your bandwidth as measured by the bandwidth
authorities. This is used by clients…
On 1 Oct 2015, at 15:22, Dhalgren Tor wrote:
On Thu, Oct 1, 2015 at 1:10 PM, Tim Wilson-Brown - teor
wrote:
>
> Can you help me understand what you think the bug is?
Relay is assigned a consensus weight that is too high w/r/t the rate
limit. Excess weight appears to be due to the high quality of TCP/IP
connectivity and the low latency of the relay. Resul…
On Thu, Oct 1, 2015 at 12:59 PM, s7r wrote:
> Ouch, that's wrong.
I have it correct. You are mistaken.
See https://www.torproject.org/docs/tor-manual.html.en
and read it closely: BandwidthRate and BandwidthBurst are token-bucket
limits on all traffic through the node, relayed traffic included; the
RelayBandwidth* options only add a separate cap on relayed traffic.
On Thu, Oct 1, 2015 at 12:55 PM, Tim Wilson-Brown - teor
wrote:
>
>> On 1 Oct 2015, at 14:48, Dhalgren Tor wrote:
>>
>> A good number appears to be around 65000 to 70000, but 98000 was just
>> assigned.
>
> Since I don’t have your relay fingerprint, I don’t know where you got this
> figure from.
Ouch, that's wrong.
"BandwidthBurst" and "BandwidthRate" refer to bandwidth consumed by
Tor as a client, e.g your localhost SOCKS5. If you are trying to limit
RELAYED traffic, as in sent and received by your relay functionality
you should use:
"Rela
>Don't cap the speed if you have bandwidth limits. The better way to do it is
>using AccountingMax in torrc. Just let it run at its full speed less of the
>time and Tor will enter in hibernation once it has no bandwidth left.
Not possible. Will violate the FUP (fair use policy) on the account.
On 1 Oct 2015, at 14:48, Dhalgren Tor wrote:
On Thu, Oct 1, 2015 at 12:40 PM, Tim Wilson-Brown - teor
wrote:
>
> How did you set this limit? What did you write in your torrc file?
>
was

BandwidthBurst 18000000
BandwidthRate 18000000
TokenBucketRefillInterval 10

is now

BandwidthBurst 16500000
BandwidthRate 16500000
TokenBucketRefillInterval 10
Hello,
Don't cap the speed if you have bandwidth limits. The better way to do
it is using AccountingMax in torrc. Just let it run at its full speed
less of the time, and Tor will enter hibernation once it has no
bandwidth left.
Example:
remove Re…
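s7r's example is cut off above; a sketch of the general shape (values
are illustrative assumptions; note that in this era AccountingMax
counted sent and received traffic separately, so a plan billing both
directions needs roughly half the plan total, less headroom for TCP/IP
overhead):

# start a new accounting period on the 1st of each month
AccountingStart month 1 00:00
# hibernate once this much traffic has been used in the period
AccountingMax 45 TBytes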
On 1 Oct 2015, at 14:33, Dhalgren Tor wrote:
Have a new exit running in an excellent network on a very fast server
with AES-NI. Server plan is limited to 100TB so have set a limit
slightly above this (18000000 bytes/sec) thinking that bandwidth would
run 80-90% of the maximum and average to just below the plan limit.
After three days the assigned…