At 08:26 8/2/2015 -0700, you wrote:
>I wonder if you could just run sslyze (or another
>TLS scanning tool) on the OR ports of all the
>relays, and see what ciphersuites they accept.
The info would be indicative, but it
would not reflect client-only Tor, which
represents the majority of installations.
Of course! This is implicit in my posting.
What I am saying is that, like old v1/v2
handshakes, Tor should be moving in the
direction of eliminating DHE. The
way to approach that is to *count*
the number of DHE handshakes and
other TLS session attributes. This
is currently being done for TOR/NT
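
To illustrate the counting idea, here is a rough Python sketch (not sslyze; the relay address is made up and error handling is minimal): connect to each OR port, record the negotiated ciphersuite, and tally the key-exchange prefix.

import socket
import ssl
from collections import Counter

relays = [("198.51.100.10", 9001)]      # example address, not a real relay

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE         # OR-port certs won't chain to a CA

kex = Counter()
for host, port in relays:
    try:
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                name, _proto, _bits = tls.cipher()
                kex[name.split("-")[0]] += 1    # 'ECDHE', 'DHE', ...
    except (OSError, ssl.SSLError):
        kex["error"] += 1
print(kex)

Of course this only shows what a relay negotiates with this particular client hello, not what real Tor clients do.
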
At 08:26 8/2/2015 -0700, you wrote:
>It also may not tell you their ordering
>preference (but it might! again,
>you'd have to look at the code.)
That "openssl s_client" test I ran was
against my 0.2.6.10 with openssl 1.0.2
relay.
It's certain that ECDHE is preferred over
DHE, but my thought is th
FYI list
https://torstatus.blutmagie.de
is registering relays running 0.2.7.2 as
having zero bandwidth.
I believe this is because the
write-history
read-history
lines in
extra-info
have moved from the top down to the
middle of the document.
Many super-fast relays running
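
If the cause is a parser that assumes a fixed position, the fix on the torstatus side would presumably be position-independent keyword matching. A rough Python sketch (the path and handling are illustrative, not Blutmagie's actual code):

def bandwidth_history(path="cached-extrainfo"):
    # Pick out write-history/read-history wherever they appear in the
    # descriptor instead of assuming they sit near the top.
    history = {}
    with open(path) as f:
        for line in f:
            keyword = line.split(" ", 1)[0].strip()
            if keyword in ("write-history", "read-history"):
                history[keyword] = line.strip()
    return history

print(bandwidth_history())
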
In the next-above thread I had mistakenly
conflated relay handshakes and 'openssl'
TLS negotiations, which are, it seems,
entirely independent. Thanks to Yawning
for correcting that misconception.
TLS encryption protects the relay-to-relay
conversation protocol if I understand
correctly, while cell
Apologies for my fuzziness regarding
the handshake type vs. the connection
TLS level being independent, and other
details. But I did have an approximation
of the correct idea. . . and made the
right recommendation.
>Bug: Assertion r == 0 failed in crypto_generate_dynamic_dh_modulus at
>../src/common/crypto.c:1788.
>
Looks like you have DynamicDHGroups enabled
in your torrc file.
This is interesting because the recent
LogJam research indicates the NSA
has probably broken commonly used 1024
bit DH groups, wh
Ended up reading rephist.c for awhile.
Seems the up/down time is accumulated
with downtime discounted 5% every
12 hours.
So the absolute worst-case age-out for
100% downtime would be something
like 0.95^76 (about 38 days) to get
it down to 2%. Since the outage
is not as bad as that, the guard
flag shou
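
The arithmetic, as a quick Python check of the 5%-per-12-hours discount (my reading of rephist.c, not the code itself):

discount = 0.95
weight = 1.0
periods = 0
while weight > 0.02:        # until the old downtime counts for under 2%
    weight *= discount
    periods += 1
print(periods, "twelve-hour periods =", periods / 2.0, "days")
# roughly 77 periods, i.e. about 38 days
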
Does anyone know how long it takes for the
guard flag to come back after a multi-day
relay downtime?
I know that the WTU (weighted time up)
requirement is 98%, and the new-relay TK
(time known) minimum is eight days.
Reading rephist.c, the WTU is
calculated from the uptime history, but rather
than spend a while to fully
Welcome.
This event led to my discovering the
Unmeasured=1 flag in the cached-*consensus
files (was wondering where it was).
That new BWauth is badly needed. I see
about 460 unmeasured relays, and from
an unscientific sample it seems
like many of them are stuck-at-20
exit nodes--valuable resources
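
Counting them is easy enough from a relay's DataDirectory. A quick sketch (not the torstatus/Onionoo code; the filename may differ on your setup):

unmeasured = stuck_at_20 = 0
with open("cached-microdesc-consensus") as f:
    for line in f:
        if line.startswith("w ") and "Unmeasured=1" in line:
            unmeasured += 1
            if "Bandwidth=20" in line:
                stuck_at_20 += 1
print(unmeasured, "unmeasured,", stuck_at_20, "at the 20 floor")
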
Much of Tor traffic is from long-term
circuits moving bulk data, so apparently
it will take many hours or even days
for rebalancing to fully take effect.
It's not clear whether it will cause
serious trouble or not.
My thought is that one BWauth in
a consensus is better than self-measure,
as BWauths
FYI list
https://trac.torproject.org/projects/tor/ticket/16696
Description
At present both 'longclaw' and 'maatuska' have
dropped out of the BW consensus ('longclaw'
is restarting with new version, not sure
about 'maatuska').
This has caused the BW consensus logic to revert
to using relay self-
At 22:05 7/28/2015 -0400, you wrote:
>
>A couple of minor notes regarding ASNs:
>
Also the AS number assigned to an IP
address may legitimately vary depending
on the source/observer. This is due to
the relativistic nature of BGP routing.
For example a Comcast address 74.95.187.105
is listed in AS
>(where a lot of IPs changed their AS from
>IANA to Digital Ocean)
A couple of minor notes regarding ASNs:
1) many IPs fall under a hierarchy of
ASs where a large core-network provider
(e.g. Level3) advertises a block and a
second client leaf-AS advertises a sub-
block. Sometimes the core AS adv
Longclaw is broken again. Has not updated
55% of the relay BW measurements for
forty hours.
Updated the ticket below with details
and raised it to a "major" issue.
At 16:42 7/25/2015 -0400, you wrote:
>Realized not everyone monitors this
>list so I opened a Trac ticket:
>
>https://trac.torproj
Perhaps a way to do it is to reset the
consensus for a relay if its IP address
moves to a different Autonomous System.
It's rare that a dynamic IP causes relays
to hop ASs (e.g. possibly SBC/ATT),
and a list of exceptions could be created
for the few cases where it causes trouble.
CYMRU has a dynamic serv
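
A rough Python sketch of querying Team Cymru's whois-based IP-to-ASN service (whois.cymru.com, TCP 43); the " -v" query flag and output format are from memory and should be treated as assumptions:

import socket

def asn_lookup(ip):
    with socket.create_connection(("whois.cymru.com", 43), timeout=10) as s:
        s.sendall((" -v %s\r\n" % ip).encode())
        data = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            data += chunk
    return data.decode(errors="replace")

print(asn_lookup("74.95.187.105"))
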
Realized not everyone monitors this
list so I opened a Trac ticket:
https://trac.torproject.org/projects/tor/ticket/16667
At 11:19 7/25/2015 -0700, you wrote:
>Interesting. Thanks starlight for the head's up! I'm not
>aware of any known issues with longclaw right now
>so looping in its operator
Subject says it.
Thought it was strange that its measurement
of my relay has not been moving. Pulled the
vote history and grep'ed a dozen relay
measurements and none of them have changed.
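
The check itself is simple. A rough sketch of one way to do it (the saved-vote filenames and the fingerprint are placeholders; the real votes come from the authority's /tor/status-vote/ URLs):

import glob

FINGERPRINT_B64 = "..."    # base64 identity digest from the vote's "r" line

for path in sorted(glob.glob("votes/longclaw-*.txt")):
    in_relay = False
    with open(path) as f:
        for line in f:
            if line.startswith("r ") and FINGERPRINT_B64 in line:
                in_relay = True
            elif in_relay and line.startswith("w "):
                print(path, line.strip())
                break
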
But all is not well. . .
Gabelmoo just crashed the weight
for the relay here from 7150 to
945 in a single step. No way
this is correct for a cleanly
running 9375 Kbyte/sec relay
with a 24-hour average load
of 2175 Kbyte/sec per Blutmagie
and 24-hour self-measured peak
of 5165. Peak easily runs
a
Over the last week it seems to me
that some improvement has gone into
TorFlow and the BWauths are performing
better. In particular the weightings
calculated by each BWauth have
stabilized and become less erratic.
Several relays in the same region
of a large network as my relay are now
showing a r
Identified the issue:
Problem occurs when the client function
in the relay daemon cannot establish
circuits to the guard for any reason,
whether because it's down or because
DDoS/sniper-attack rate limiting
has been activated by the guard.
This shows up most obviously when
the guard(s) are manuall
Circuit count seems to have mostly reverted to normal:
Tor's uptime is 4 days 18:00 hours, with 6691 circuits open. . .
on 3990 channels
Found an object overview:
http://www.kmcg3413.net/aman/index.py?page=tor_internals.txt
[connection]
[channel]
[circuit]
[stream]
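
To poke at these on a running relay, a small sketch using the stem controller library (assumes ControlPort 9051 and authentication configured; circuit-status only lists circuits this instance originated, while orconn-status roughly corresponds to the channels/connections above):

from stem.control import Controller

with Controller.from_port(port=9051) as controller:
    controller.authenticate()
    circuits = controller.get_info("circuit-status").splitlines()
    orconns = controller.get_info("orconn-status").splitlines()
    print(len(circuits), "locally built circuits,", len(orconns), "OR connections")
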
I've seen this about three times, maybe
four. Anyone have any idea what's going
on? Version 0.2.6.10 on Linux. Showed
up in 0.2.5.x and 0.2.4.x earlier.
1) seems to happen only one time per relay start
after a few hours up to a couple of days;
weeks or months between occurrences
2) m
Strange, right after the event the reported
number of "circuits" went up hugely
--have never seen this:
Heartbeat: Tor's uptime is 4 days 0:00 hours, with 5359 circuits open. . .
Heartbeat: Tor's uptime is 4 days 6:00 hours, with 17524 circuits open. . .
But the line-count from "GETINFO orconn-st
https://gitweb.torproject.org/torspec.git/tree/dir-spec.txt
line 1138-1139:
>Authorities MUST generate a new
>signing key and corresponding
>certificate before the key expires.
???
The authority node Faravahar's key
expired and it dropped off the
consensus. Looks purposeful.
Does anyone know why? Are authority
public keys embedded in the relay
code or learned? New release required
for new key? Is this usual?
Just curious.
At 10:04 7/4/2015 +0200, nusenu wrote:
>I find it more worrying that we do not "hear"
>about the 'more serious attacks' that keep
>them busy and don't allow them to look into
>i.e. 'AviatoChortler' (even after a few
>weeks). That might mean that there is a
>constant stream of 'more serious attacks
Looking through the list of suspect
exit nodes, they fall into three
categories:
1) very low-bandwidth Windows Vista
systems running 0.2.4.21
2) recently dead/offline
3) one massive Linux exit
The (1) exits might be hacked Windows
systems, perhaps part of a botnet
--no contact given. Seems a
Could someone comment on why 15 exit nodes
discovered to be sniffing and abusing login
credentials have not been marked with the
BAD EXIT flag?
The research appears to be legitimate, involved
a good deal of effort, and seems credible:
https://chloe.re/2015/06/20/a-month-with-badonions/
was blogg
Takes a full two weeks for the
new-relay cap to be aged-out.
Patience.
At 13:33 7/2/2015 +, Speak Freely wrote:
>
>I did as s7r suggested, updated to 0.2.7.1-alpha-dev,
>and changed fingerprints.
>
>And we're still crippled by CW of 20.
>(Yes, yes, I know. The relay has gone back
>to stage 1
It looked interesting so I turned on
EntryStatistics 1
ConnDirectionStatistics 1
HiddenServiceStatistics 1
in addition to the default
DirReqStatistics 1
and the output is indeed interesting.
Also have
CellStatistics 1
though this information does not seem
to be published in the ExtraInfo dige
At 02:43 6/6/2015 -0400, starlight.201...@binnacle.cx wrote:
>I'm back to complain further about
>erratic bandwidth authority behavior,
>previously. . .
MYSTERY SOLVED!!!
Of course one should always RTFS (read
the fine specification) when trying to
understand all things Tor.
First, scratch my pr
>Mon Jun 8 05:25:55 UTC 2015
>> UBSAN seems expensive and doesn't seem
>> it would run other than test, but
>> I didn't work on it long and am not
>> 100% certain. Was trying ASAN extra
>> stack checking at the time, which may
>> have been the culprit.
>
>As I said in my previous email, if you're
A straightforward improvement to BWauth
measurement crossed my mind.
It seems likely that part of the volatile,
bipolar measurement issue is over-fast
feedback of weighting increases and
the increased traffic that results.
For example, a BWauth measures 8 MByte/sec
of bandwidth day one and increases
the a
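
A sketch of the damping idea (illustrative only, not TorFlow code): cap how fast the weight can rise per voting round, so that traffic and the next measurement can catch up, while letting cuts apply immediately.

def next_weight(current, measured, max_rise=1.25):
    # never raise the weight by more than 25% per voting round
    if measured > current:
        return min(measured, current * max_rise)
    return measured

w = 8000.0    # e.g. 8 MByte/sec in the consensus's KByte/sec units
for m in (12000, 15000, 11000, 9000):
    w = next_weight(w, m)
    print(w)
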
>Mostly, latency and packet loss in nodes
>will negatively impact Tor's usability. . .
Was reading this thread and thought
of a potentially useful and perhaps
not overly difficult BWauth
enhancement--perhaps worth
considering as part of ongoing
efforts to rework the BWauth
scripts.
When measuring a
At 04:12 6/7/2015 +1000, teor wrote:
>Please let me know how you go - the 0.2.6.x
>series should also be relatively ASAN
>and UBSAN clean, as Tor has been tested
>with them since late 2014.
I've run 0.2.4.x and 0.2.5.x with ASAN
live in production with no problems
when the relay had less bandwidth
I'm back to complain further about
erratic bandwidth authority behavior,
previously
[tor-relays] BWauth kookiness
https://lists.torproject.org/pipermail/tor-relays/2015-May/007039.html
Allowing that BWauths are in a bit of flux,
with tor26 replaced by maatuska and
moria1 dropping the GuardFractio
Should probably add that this
relay was tuned and would be
expected to show somewhat better
performance than a typical relay
of the same capacity, but
I wouldn't expect more than
a 20-30% boost from that.
The local bandwidth observation
was, at the time of the consensus
sample
published 2015-05-2
Some possible fallout related to
the earlier discussion
[tor-relays] amount of unmeasured relays continuously rising since 2 weeks
https://lists.torproject.org/pipermail/tor-relays/2015-May/007003.html
My relay
Binnacle 4F0DB7E687FC7C0AE55C8F243DA8B0EB27FBF1F2
has 9375 Kbyte (75 Mbit) of u
Generally openssl security fixes do not
change API/ABIs. In this case I built
the newer release without export ciphers,
though that seems an unlikely cause.
Perhaps it was memory corruption.
Relay is running on non-ECC CPU,
though the system has never exhibited
any problems related to memory
cor
Yesterday built openssl-1.0.1m and dropped
it over the top of 1.0.1j. Tor said
OpenSSL version from headers does not match the version we're running with. If
you get weird crashes, that might be why. (Compiled with 100010af: OpenSSL
1.0.1j 15 Oct 2014; running with 100010df: OpenSSL 1.0.1m 19 M
Does anyone have any idea what this is about?
Is it serious?
Apr 05 22:18:59.320 [Warning] Got a bad signature on a networkstatus vote
Apr 05 22:18:59.320 [Warning] Got a bad signature on a networkstatus vote
Apr 05 22:18:59.320 [Warning] Got a bad signature on a networkstatus vote
Apr 05 22:18:59.
At 13:56 4/5/2015 -0600, you wrote:
TSLAGrin D0BA 2462 E224 2DE9 7F6F 7837 8F58 E63B AD51 88F8
Right answer, wrong question.
Your relay is not yet published so
it cannot be viewed in https://atlas.torproject.org
Can you connect locally with nc/netcat/ncat?
You should run 'netstat -nt' to check
I find it a bit odd that moria1 is reporting
exactly double the bandwidth for the relay
here:
Binnacle 4F0DB7E687FC7C0AE55C8F243DA8B0EB27FBF1F2
gabelmoo Bandwidth=7385 Measured=5810
tor26    Bandwidth=7385 Measured=6090
moria1   Bandwidth=7385 Measured=18700 GuardFraction=47
longclaw Bandwidth=73