Re: [tor-relays] oniontip.com
Sebastian Urbach: > Hi, > > I'd like to suggest adding oniontip.com to the "Donate" section on the > tor website. It's a nice possibility to help the Relay-Operators. While I think OnionTip is awesome, I'm a little concerned about its apparently built-in lack of external auditability. Why is it generating one-time use Bitcoin addresses, for example? If it is for key material protection reasons, why can't these one-time addresses flow through a single, more protected address that is easy to verify is performing as expected? Amusingly, I'm perhaps the most vocal critic of the public visibility of bitcoin transactions on our lists, but in this case, it would provide a clean audit trail for the service, which is already mostly public anyway, at least on the output side. And the input side is the responsibility of the user to keep private with proper address use and/or mixes, at least in the Bitcoin world as it is today. While I'm at it, I have a couple of wishlist items for this thing. I don't think these are blockers to recommending the service as much as auditability is, but they sure would be cool: 1. It should allow me to select if I want to donate only to nodes that have the Exit flag. Running an exit is way more involved (and often more expensive) than running a normal node, and I think it would be good to give folks the option to target their donation in this way. And perhaps encourage it as the default donation mode. 2. It also already seems to have GeoIP information, at least on the country level. There are all sorts of interesting selectors that could be done with this. You could donate to relays in countries in inverse proportion to the number of relays they have, to encourage jurisdictional diversity, for example. Or more simply, just pick a country. This one is admittedly less cool and more complicated to figure out than just the Exit vs non-exit thing, though. (Do you also weight countries per-capita? Per internet user? Per Tor user? etc). In my opinion, each of these breakout options should have its own dedicated (intermediate/flow-through?) BTC address, so it is possible to perform auditing for each of them using only the blockchain. -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] oniontip.com
Donncha O'Cearbhaill: > Thanks everyone for all the feedback I've received about OnionTip. It > was originally created in a rush during a hackathon so there is > definitely room for improvement. > > Mike Perry, as Mike Cardwell has said, it is currently possible to > select a subset of relays to receive donations by using the filters > (Country, Exit flag, Guard flag) at the top of the OnionTip page. > I'd like to expand these filters and maybe tweak the defaults to provide > a greater share to exits. Exit bandwidth is more valuable to the > network, and I believe it should be incentivised accordingly. Ah, it was in no way clear to me that I was actually restricting my donation to these nodes as opposed to just viewing them. I suppose I may be dense, but I expect many others will think similarly, especially since the UI for selection says "Only show ..." and not "Only donate to ...". > I completely agree that it's important the service and its payments are > externally auditable. From an implementation point of view, when a user > filters a particular set of relays and clicks the donate button, a new > bitcoin keypair and address is derived and stored in the database along > with the list of relays they've selected. Creating a new address for > each donation is the simplest way of ensuring a user's donation goes to > the correct set of relays they select. Forwarding the donation directly > from that one-time-use address to the receiving relay operators also > allows the user to easily and immediately confirm on the blockchain that > their donation was forwarded correctly. Ah, I see. Ok. Makes sense. > From an external point of view, next week I'll add a page to the site > where anyone can view all previously sent transactions. I'll also > publish the master public key which corresponds to the addresses I'm > generating along with a script to confirm they are being generated > without any tricks. Ok. I can't speak for everyone at Tor, but I think this kind of verifiability is what will make it much easier for us to agree to add a link to OnionTip on our donations page. > There's a few other issues in the current implementation which I have > outlined on the Github repo > (https://github.com/DonnchaC/oniontip/issues). I'll send a post to the > list early next week with my proposed solutions and look for some > feedback before I implement them. Great! Keep us in the loop. -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
[tor-relays] Fwd: Call for a big fast bridge (to be the meek backend)
Forwarded from tor-dev to here, in case any relay operators missed this: - Forwarded message from David Fifield - Date: Mon, 15 Sep 2014 19:12:23 -0700 From: David Fifield To: tor-...@lists.torproject.org Subject: [tor-dev] Call for a big fast bridge (to be the meek backend) User-Agent: Mutt/1.5.23 (2014-03-12) The meek pluggable transport is currently running on the bridge I run, which also happens to be the backend bridge for flash proxy. I'd like to move it to a fast relay run by an experienced operator. I want to do this both to diffuse trust, so that I don't run all the infrastructure, and because my bridge is not especially fast and I'm not especially adept at performance tuning. All you will need to do is run the meek-server program, add some lines to your torrc, and update the software when I ask you to. The more CPU, memory, and bandwidth you have, the better, though at this point usage is low enough that you won't even notice it if you are already running a fast relay. I think it will help if your bridge is located in the U.S., because that reduces latency from Google App Engine. The meek-server plugin is basically just a little web server: https://gitweb.torproject.org/pluggable-transports/meek.git/tree/HEAD:/meek-server Since meek works differently than obfs3, for example, it doesn't help us to have hundreds of medium-fast bridges. We need one (or maybe two or three) big fat fast relays, because all the traffic that is bounced through App Engine or Amazon will be pointed at it. My PGP key is at https://www.bamsoftware.com/david/david.asc if you want to talk about it. David Fifield ___ tor-dev mailing list tor-...@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev - End forwarded message - -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
[tor-relays] Estimating the value and cost of the Tor network
Hello everyone, I would like to put together some estimates on bounds of the current value and cost of the capacity of the Tor network as it is, and use that to generate some rough guestimates on what it would cost to grow it (assuming we solved all the complicated problems with respect to maintaining diversity and operator incentives). In order to make these estimates, I need some people to tell me: 1. Your node identity fingerprints. 2. How these fingerprints map to hardware, CPU cores, and uplink. 3. How much you are paying for this uplink per month. If you are paying less than market rate because of a friendly ISP, ideally you would also tell me what the standard market rate is at that ISP. If you have multiple nodes at multiple datacenters, please break these costs and fingerprints out individually rather than telling me aggregate information. I'd most like to hear from people who are doing the recommended best-practice of running one Tor process instance per CPU core, are paying for dedicated uplink at around 100Mbit/sec or higher at a datacenter, and who are not CPU bound on any of their Tor processes. If this does not apply to you, I'd still like to hear from you, but please tell me these details of your setup. Exit node data is most useful, but I am happy to hear about non-exits too. I suspect they may be much cheaper on average, and getting some data on this is also important. Unfortunately, bridges are not that useful at this time. You do not need to send this information publicly to the list. I am happy to receive it privately via GPG. My GPG key id is 0x29846B3C683686CC, and that key signs all of my mail to all torproject lists. You can get it here: https://pgp.mit.edu/pks/lookup?op=get&search=0x29846B3C683686CC -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] Estimating the value and cost of the Tor network
Moritz Bartl: > Prices vary widely across different countries. We pay between $400 and > $1500 per Gbit/s per month in "popular and cheap locations". In a > scenario where we want to grow the network and at least keep the current > geographical diversity (or even grow it), we'd have to at least equally > strengthen less fortunate locations. Right. With just one or two identity fingerprints, I can give an estimate on the minimum cost to build an equivalent Tor network with the same capacity, as I have already done. This is not very interesting, though, as you point out. But with just one or two good example identity fingerprints (with pricing) in key locations, I can tell us how much investment it would take to build the Tor network with the diversity we want, using our current load balancing and network load. In other words, I can easily calculate what it would cost to ensure that the network path selection was made up of W% of RU, X% of US relays, Y% of EU relays, Z% of JP relays, etc etc. With many more datapoints I can tell us how much the current Tor network actually costs with its current diversity, but I think that is actually less interesting, unless we wanted to be able to make assumptions like "As soon as we start paying people for bandwidth, all of (or X% of) our volunteers will instantly disappear" (which seems unlikely to me, but others think is a realistic concern). *But* In order to do any of this, I need specific identity fingerprints and prices to do that calculation first. Again, I want to extrapolate from real relays, using our current load balancing. So far only two people have given me identity fingerprints with actual pricing information. I need way more. -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
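To make the country-weight idea above concrete, a calculation of roughly the following shape would do it. Every number below (target weights, total capacity, per-region prices) is an invented placeholder; the real version would plug in actual consensus weights and the per-Mbit/s prices that operators report.

# Rough sketch of the country-weight cost estimate described above. All
# figures here are hypothetical placeholders; the real calculation would
# use actual relay fingerprints, consensus weights, and the per-Mbit/s
# prices that operators report for their datacenters.

# Target fraction of path-selection weight we want in each region.
target_weights = {"RU": 0.10, "US": 0.30, "EU": 0.45, "JP": 0.15}

# Total capacity needed to carry current network load (hypothetical), Mbit/s.
total_capacity_mbps = 100000

# Example cost per Mbit/s per month in each region (invented numbers).
cost_per_mbps_month = {"RU": 1.50, "US": 0.80, "EU": 0.60, "JP": 2.00}

def monthly_cost(targets, total_mbps, prices):
    """Monthly cost of provisioning total_mbps split by the target weights."""
    return sum(weight * total_mbps * prices[region]
               for region, weight in targets.items())

print("Estimated monthly cost: $%.0f"
      % monthly_cost(target_weights, total_capacity_mbps, cost_per_mbps_month))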
Re: [tor-relays] Estimating the value and cost of the Tor network
I really need identity fingerprints to see how much traffic your node is actually pushing, what its consensus weight is, when and how often it is hibernating, if it is otherwise strangely rate limited, etc. Kees Goossens: > Hello Perry. > 5 TB/month for 20 euro/month at transip in the NL. I run a non-exit relay > there. (Thinking about making it an exit.) > Kees > > -- > Kees on the move > > > On 25 Sep 2014, at 03:03, Mike Perry wrote: > > > > Moritz Bartl: > >> Prices vary widely across different countries. We pay between $400 and > >> $1500 per Gbit/s per month in "popular and cheap locations". In a > >> scenario where we want to grow the network and at least keep the current > >> geographical diversity (or even grow it), we'd have to at least equally > >> strengthen less fortunate locations. > > > > Right. With just one or two identity fingerprints, I can give an > > estimate on the minimum cost to build an equivalent Tor network with the > > same capacity, as I have already done. This is not very interesting, > > though, as you point out. > > > > But with just one or two good example identity fingerprints (with > > pricing) in key locations, I can tell us how much investment it would > > take to build the Tor network with the diversity we want, using our > > current load balancing and network load. > > > > In other words, I can easily calculate what it would cost to ensure that > > the network path selection was made up of W% of RU, X% of US relays, Y% > > of EU relays, Z% of JP relays, etc etc. > > > > With many more datapoints I can tell us how much the current Tor network > > actually costs with its current diversity, but I think that is actually > > less interesting, unless we wanted to be able to make assumptions like > > "As soon as we start paying people for bandwidth, all of (or X% of) our > > volunteers will instantly disappear" (which seems unlikely to me, but > > others think is a realistic concern). > > > > *But* In order to do any of this, I need specific identity fingerprints > > and prices to do that calculation first. Again, I want to extrapolate > > from real relays, using our current load balancing. > > > > So far only two people have given me identity fingerprints with actual > > pricing information. I need way more. > > > > > > -- > > Mike Perry > > ___ > > tor-relays mailing list > > tor-relays@lists.torproject.org > > https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays > ___ > tor-relays mailing list > tor-relays@lists.torproject.org > https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] Oniontip
Thomas White: > Hmmm... appears to be have been upgraded since I last checked then > (which was only a few weeks ago!). Nicely done oniontip. I stand > corrected. Well, my original ask was for everyone to be able to verify that all 12.36 BTC that oniontip has received (as of right now) has actually been distributed how the users have asked. I suppose that since individual users can easily inspect that their donation has gone to the relays they selected (by looking at blockchain.info for their one-time use address), it is unlikely that the system will get away with cheating for long. But it is still hard for a new donor to tell if any other donors have been swindled recently, using simple blockchain inspection. They basically have to click around on the individual relay recipient keys to make sure everything looks legit. This makes me nervous in terms of endorsement. I can easily see hundreds of users getting swindled before one of them suddenly realizes that there is an extra bitcoin address in their transactions that is not in the original relay list they selected, or that the actual bitcoin distribution was slightly different than what they selected. If all users could inspect all donations easily, this type of compromise would be found quicker. Ideally, it would be possible to verify all of these questions (and many more) with only the blockchain. For instance, a comment in the bitcoin transaction could indicate the OnionTip options selected, and a single page on the website could allow us to view all donations to the system. Beyond this, I think there are actually interesting sociological questions we could answer with easy access to the OnionTip donation data and option selection. I'm very curious how donors are choosing to distribute their Bitcoin to the relays. For instance: 1. Is OnionTip encouraging the type of network diversity we want? Do we want to suggest changes to the default donation mode to encourage better diversity? 2. UI is still confusing to me. Is this UI causing people to prefer a certain type of donation over others, where they probably shouldn't? a. Is anyone actually using the Guard or Exit filters? If not, this means my super-cheap and unreliable FDC middle node will probably get me more OnionTip donations than either a more stable Guard node, or a more hassle-prone Exit node. This seems like an undesirable way to incentivize relay operation. Is it happening? Or are most people selecting Guard+Exit? b. Are people taking advantage of the country selection dialog? Are they doing it in a way that is favoring underrepresented countries? Or are people just choosing countries based on the next World Cup match, the current Olympic gold medal count leader, or some other crazy notion that seems to make little sense to network diversity? 3. What are big donors doing? Do they always select the default choice? a. If so, we should think waaay harder about what this choice is. b. If not, what do they want? Do they like specific or strange countries? Do they like countries with the fewest relays? With the lowest current bandwidth? With the best laws? Do we agree with their choices, and want to make it easier for other donors to make them too? Or should we be concerned, and try to encourage other behavior? c. Maybe only big donors get scammed with extra BTC destination addresses or a different transaction entirely? How can I see if other recent big donors have been scammed? > On 28/09/2014 03:28, Ed Carter wrote: > > The process is completely transparent. 
All Bitcoin transactions > > are viewable by the public on the Bitcoin blockchain. The Bitcoin > > addresses are posted by the relay operators themselves in their > > contact info on their relay. I can confirm that I receive > > donations made to the address I posted on my relay. > > > > My relay: > > https://globe.torproject.org/#/relay/3C49A7D9BEBC668352F627CE60B1FE9B628DD2EA > > > > Blockchain.info web page showing donations received to my > > address: > > http://blockchain.info/address/1GXZVChXoxgrBzqMsCrWGu2ua6VTKSH6U1 > > > >> My concern (which has been highlighted before by Mike Perry) is > >> that the site lacks accountability and transparency. There is no > >> way to verify the donations actually reach the operators. -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
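To illustrate the kind of per-donation audit being asked for here: given the forwarding transaction and the set of relay payout addresses the donor actually selected, a donor (or auditor) could flag any output paying an unexpected address. The sketch below assumes a decoded transaction in the JSON shape that blockchain.info's rawtx endpoint returns (an "out" list with "addr" and "value" fields); the addresses in the example are placeholders.

# Sketch of the per-donation audit described above: given the forwarding
# transaction and the set of relay addresses the donor selected, flag any
# output that pays an address outside that set. The transaction format
# assumed here matches blockchain.info's rawtx JSON ("out" entries with
# "addr" and "value" in satoshis); the addresses are placeholders.

def audit_forwarding_tx(tx, selected_addresses, change_addresses=()):
    allowed = set(selected_addresses) | set(change_addresses)
    unexpected = []
    for out in tx.get("out", []):
        addr = out.get("addr")
        if addr is not None and addr not in allowed:
            unexpected.append((addr, out.get("value", 0)))
    return unexpected

# Hypothetical donation that selected two relays but also pays a third address:
tx = {"out": [{"addr": "1RelayAddrA", "value": 500000},
              {"addr": "1RelayAddrB", "value": 500000},
              {"addr": "1SomeOtherAddr", "value": 100000}]}
selected = {"1RelayAddrA", "1RelayAddrB"}
for addr, value in audit_forwarding_tx(tx, selected):
    print("Unexpected output: %s (%d satoshis)" % (addr, value))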
Re: [tor-relays] Oniontip
Donncha O'Cearbhaill: > Thanks everyone for all the feedback. I'm delighted to see OnionTip is > being used and that relay operators are getting some (usually token) > appreciation. > > Mike, I've taken on board all the feedback you gave to this list on 2nd > September. I've just pushed some changes. There is now a listing of all > previous transactions sent from OnionTip, their size and the number of > relays they have selected to pay.[1] > > The number of selected relays gives a rough indication of how many > people are just paying the default (to all the relays) or are setting > their own criteria. > > I've also published a Python script to validate the transactions > completely from the blockchain based on the seed I use to generate > addresses [2]. > > I'm open to all suggestions for a better distribution strategy. At the > moment I definitely think the incentive is somewhat wrong when someone > gets a much larger share by running a middle relay in a cheap bandwidth > location than someone running a smaller exit in a geographical diversity > location. > > As most people seem to use the defaults, for a start I'm going to add an > option so that Exits receive a premium on their bandwidth share by > default (maybe 1.5-2x). > > If there are any particular questions anyone has about the data or > donations so far, I'm happy to pull the data from the DB and try to > answer them. For one, I'm going to try to find out how many relays had > a bitcoin address listed in their first day or two. Maybe it can give an > indication how many new relay operators are being pulled in because of > OnionTip. > > Thanks again for all the feedback so far. I look forward to seeing what > we can do to improve OnionTip, and to continue supporting the growth of > the Tor network. > > Regards, > Donncha > > [1] https://oniontip.com/transactions > [2] > https://github.com/DonnchaC/oniontip/blob/master/scripts/payment-check.py Thank you for publishing these scripts! I think the most important thing right now is for us to be able to easily tell what the system is doing, and I think you have done that. As for what the default *should* be, I think we may want to think about that for a bit depending on what we think we want to encourage in the network. If we get an idea as to whether Exits are actually more expensive to run than non-Exits, we can use that to guide these bonuses. Thanks a lot for OnionTip! It's now got my vote for inclusion on the Tor donations page! > On Sun, 2014-09-28 at 02:32 -0700, Mike Perry wrote: > > Thomas White: > > > Hmmm... appears to be have been upgraded since I last checked then > > > (which was only a few weeks ago!). Nicely done oniontip. I stand > > > corrected. > > > > Well, my original ask was for everyone to be able to verify that all > > 12.36 BTC that oniontip has received (as of right now) has actually been > > distributed how the users have asked. > > > > I suppose that since individual users can easily inspect that their > > donation has gone to the relays they selected (by looking at > > blockchain.info for their one-time use address), it is unlikely that the > > system will get away with cheating for long. But it is still hard for a > > new donor to tell if any other donors have been swindled recently, using > > simple blockchain inspection. They basically have to click around on the > > individual relay recipient keys to make sure everything looks legit. > > > > This makes me nervous in terms of endorsement.
I can easily see hundreds > > of users getting swindled before one of them suddenly realizes that > > there is an extra bitcoin address in their transactions that is not in > > the original relay list they selected, or that the actual bitcoin > > distribution was slightly different than what they selected. If all > > users could inspect all donations easily, this type of compromise would > > be found quicker. > > > > Ideally, it would be possible to verify all of these questions (and many > > more) with only the blockchain. For instance, a comment in the bitcoin > > transaction could indicate the OnionTip options selected, and a single > > page on the website could allow us to view all donations to the system. > > > > > > Beyond this, I think there are actually interesting sociological > > questions we could answer with easy access to the OnionTip donation data > > and option selection. I'm very curious how donors are choosing to > > distribute their Bitcoin to the relays. > > > > For instance: > > > > 1. Is OnionTip encouraging the type of network div
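As a complementary spot check, anyone can tally what a relay operator's published donation address has received so far. The snippet below assumes blockchain.info's public rawaddr JSON endpoint and its total_received field; it only sums incoming value and does not by itself prove the funds came through OnionTip.

# Spot check: total BTC ever received by a relay operator's published
# donation address, via blockchain.info's public JSON API (assumed here:
# https://blockchain.info/rawaddr/<address> with a "total_received" field,
# reported in satoshis).

import json
import urllib.request

def total_received_btc(address):
    url = "https://blockchain.info/rawaddr/%s" % address
    with urllib.request.urlopen(url) as response:
        data = json.loads(response.read().decode("utf-8"))
    return data["total_received"] / 1e8  # satoshis -> BTC

# Example: the address Ed Carter published in his relay's contact info above.
print(total_received_btc("1GXZVChXoxgrBzqMsCrWGu2ua6VTKSH6U1"))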
Re: [tor-relays] Anonbox Project
Andrew Lewman: > Perhaps a more constructive approach is to help define what the golden > standard for a true tor router should be at > https://trac.torproject.org/projects/tor/wiki/doc/Torouter. There are a > bunch of open tickets and design questions which need thought, research, > and solutions. FWIW, I spoke with someone named 'torrouter' in #tor-dev IRC a couple months back. I'm not sure if they were related to this project, but I pointed out that the biggest problem with a simple "torify everything" Tor router is that there are tons and tons of apps on your computer that love to make network connections and broadcast information about your computer to remote servers. This problem was first analyzed in 2008 at PETS: http://www.chiark.greenend.org.uk/~mroe/research/pets2008.pdf. Since then, we've seen the advent of app stores, account-based autoupdates, Dropbox, iCloud, things like Ubuntu's "Spotlight" search, and many many more chatty things. Not to mention the web browser tracking problem, of course. The problem with naively shoving all of this stuff over Tor is that Tor Exit nodes (and services watching for long-term Exit IP usage correlation) can see that user "AnonymousDissident1" really is the same as "frank.grimes.sf.ca@gmail.com" who has a Dropbox account that he paid for with his credit card. This may not be a problem for many people, but statements like "The anonabox uses Tor to allow anyone to access the Internet anonymously without having to install any software" and "The result is strong, secure anonymity. Using the anonabox hides your location, as well as all the other personal data that leaks through ordinary Internet use" are really not something you can claim if you are operating in this way. Location in particular will probably still leak all over the place due to chatty apps you have installed that broadcast it happily. All of that said, I immediately followed this bad news up with an offer to the 'torrouter' IRC nick that I would be happy to work with them to design a secure pairing system between Tor Browser and a Tor router, such that if you were using Tor Browser, it could get configuration information from your Tor router so that it used it as an upstream proxy, or such that the Tor router would then install a firewall that only allowed certain Tor Guard/bridge IPs through. In this mode, the Tor router could actually act as a defense-in-depth mechanism that would block all non-proxied traffic, providing additional protection against browser or other remote exploits, by only allowing properly Tor-configured application traffic to exit onto the Tor network. I imagine the same sort of mechanism could also be used to provide defense-in-depth for OrBot+OrWall+Android and Tails users. Unfortunately, the 'torrouter' nick stopped talking to me at this point. I'm not sure if they just didn't want to put in this extra work, or were intimidated by how much work this would be, or what. Granted, this would not be trivial to implement, but the offer to help come up with a design for it still stands. We can figure out the implementation and development cost sharing details after we have a good design. I'm not sure such a thing could be designed and implemented by their January 2015 rollout date goal, but I suspect they're going to struggle to meet that anyway with this much interest. I wish they'd actually talked to us about this earlier, instead of ignoring my offer back then.
As a result of their claims not matching up to reality, I've been debating writing a blog post warning about the various issues with Anonabox, but that seems premature at this point, too. I suppose it still may come to that, though, if they keep ignoring us and making extreme, unsubstantiated, and inaccurate claims, especially with our trademark and logo plastered on the thing, as if it were an endorsement, or even our product. -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
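To sketch the defense-in-depth idea mentioned above (purely an illustration, not a design from the Anonabox project or from Tor): once a paired Tor Browser has told the router which guard or bridge addresses it uses, the router could install a default-deny policy that only lets traffic reach those addresses. The guard IPs, ports, and interface names below are made up.

# Illustration of the "only allow traffic to the Tor guards" idea: given
# the guard addresses a paired Tor Browser reports, emit a default-deny
# forwarding policy that only lets LAN clients reach those guards' ORPorts.
# Addresses, ports, and interface names are made up; a real design also
# needs an authenticated pairing step and handling of guard rotation.

guards = [("192.0.2.10", 443),       # hypothetical guard 1
          ("198.51.100.20", 9001)]   # hypothetical guard 2

def firewall_rules(guards, lan_if="br-lan", wan_if="eth0"):
    rules = []
    for ip, port in guards:
        rules.append("iptables -A FORWARD -i %s -o %s -d %s -p tcp "
                     "--dport %d -j ACCEPT" % (lan_if, wan_if, ip, port))
    # Everything else from the LAN is dropped, so only properly
    # Tor-configured application traffic gets out at all.
    rules.append("iptables -A FORWARD -i %s -o %s -j DROP" % (lan_if, wan_if))
    return rules

print("\n".join(firewall_rules(guards)))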
Re: [tor-relays] Fwd: [tor-talk] please advise on renting a gigabit capable dedicated server
Libertas: > Hi tor users, my coworkers and I are considering getting together to > run a gigabit exit relay and are curious if you all have advice as to > the best place to go shopping for a server with 1gbps dedicated > bandwidth in a location that is helpful to the network. Someone on irc > pointed me to this list, but I'm happy to ask on another if it would > be more appropriate. Thanks in advance! Some friends and I used to run a 1GBit Reduced Exit[1] in the US at Applied Operations[2] for $800/mo, which included hardware rental. Not sure if that deal is still available, but they were Tor-friendly. 1. https://trac.torproject.org/projects/tor/wiki/doc/ReducedExitPolicy 2. http://www.appliedops.net/. -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] [tor-talk] Quantum Insert detection for everyone
I'm being a jerk and cross-posting to tor-relays, because I want to make sure that relay operators are aware of the differences in the Snort vs HoneyBadger approach. Chris Dagdigian: > > I run a US-based exit node and would be interested in a way to run > this software without compromising the users exiting my node. > Looking forward to your additional writeups - especially anything > geared towards exit nodes and quantum insert detection. I too look forward to David's writeup! For what it's worth, I think HoneyBadger is likely to be safer for exits, more comprehensive, more accurate, less noisy, and more high performance than a Snort-based solution. HoneyBadger is focused only on this particular attack and is written in golang, whereas Snort has tons of rules for everything and is written in C. This means that HoneyBadger will have a much smaller vulnerability surface and should be much harder to directly exploit than Snort. Since we're talking about detecting and capturing attacks from well funded state/world-class adversaries here (wow, what a world), vulnerability surface minimization and general memory safety are top priority. Snort is also vulnerable to tailored attacks designed to flood its logs and/or avoid detection. Snort is particularly susceptible to missing stateful attacks designed to subvert its stateless rule-based approach to detection. Several types of TCP injection attacks that rely on TCP reassembly will likely fall into this category (type 4 in: https://honeybadger.readthedocs.org/en/latest/#tcp-injection-attacks). HoneyBadger also appears to have better logging options than the Snort rules. David has been in contact with malware researchers who were quite insistent that to properly analyze 0day, a single evilpacket is very likely to be insufficient -- context is essential, especially if the attacker wants to obfuscate the attack or otherwise avoid exploit extraction. Hence the need to provide optional full-take and rolling logging options that make it easier to extract the full TCP stream of a tampered connection, as well as related concurrent traffic (such as a stream from a related HTTP redirect to an ephemeral URL). I've been talking with David about ways to place these logs on a ramdisk or an ephemerally encrypted partition, so that when detailed logs are needed, they can be handled as safely as possible. > >David Stainton <mailto:dstainton...@gmail.com> > >April 22, 2015 at 2:41 PM > >Greetings, > > > >Did you all see this Wired article about Quantum Insert detection? > > > >https://www.wired.com/2015/04/researchers-uncover-method-detect-nsa-quantum-insert-hacks > > > >These TCP injection attacks are used by various entities around the > >world (not just NSA!) to target individuals for surveillance or > >perhaps to add their computers to a botnet for other purposes. > > > >If you do not use a VPN or Tor you can run "Quantum Insert" detection > >on your computer and detect when you receive an attack attempt. > >However be advised that proper sandboxing is important here because > >intrusion detection and protocol anylsis tools are notoriously > >insecure and get pwned all the time. > > > >If you are a Tor exit relay operator you have the options of running > >detection software; However you should not publish the results > >publicly without mixing in some noise or your published data might > >make it possible for some adversaries to deanonymize Tor users. 
If > >your country has strict telecommunications laws then it might only be > >legal for you to perform this type of detection if you do not perform > >logging. > > > >For the past several months... in my free time I've been slowly > >developing a very comprehensive TCP injection attack detection tool > >called HoneyBadger: > > > >https://github.com/david415/HoneyBadger > > > >Quantum Insert is a NSA codeword for "TCP injection attack", however > >either of these terms are too vague. During my research I was able to > >classify 4 different types of TCP injection attack. When I say that > >HoneytBadger is comprehensive what I mean is that Honeybadger can > >detect ALL of these types of TCP injection attack types... I describe > >them briefly here: > > > >https://honeybadger.readthedocs.org/en/latest/ > > > >Here's the Fox-IT blog post about their Quantum Insert detection software: > >http://blog.fox-it.com/2015/04/20/deep-dive-into-quantum-insert/ > > > >I am going to work on writing a much more comprehensive blog post; it > >will be filled with gory technical details AND it will include > >information on how to use HoneyBa
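For readers curious what the core signal in this class of detection looks like, here is a toy sketch; it is not HoneyBadger's implementation (a real detector needs proper TCP reassembly, handling of overlapping segments, and careful sandboxing). The simplest tell is two segments claiming the same sequence position in the same flow but carrying different payloads.

# Toy sketch of the simplest TCP-injection signal: two segments in the
# same flow claiming the same sequence number but carrying different
# payloads. Benign retransmissions repeat identical data, so a mismatch
# is the classic injection-race signature. Real detectors (HoneyBadger,
# the Fox-IT rules) do full reassembly and cover the other attack types;
# this only illustrates the core idea. Requires scapy and root.

import hashlib
from scapy.all import IP, TCP, sniff

seen = {}  # (src, sport, dst, dport, seq) -> sha256 of payload

def check(pkt):
    if not (pkt.haslayer(IP) and pkt.haslayer(TCP)):
        return
    payload = bytes(pkt[TCP].payload)
    if not payload:
        return
    key = (pkt[IP].src, pkt[TCP].sport, pkt[IP].dst, pkt[TCP].dport,
           pkt[TCP].seq)
    digest = hashlib.sha256(payload).hexdigest()
    if seen.get(key, digest) != digest:
        print("Possible TCP injection on %s:%d -> %s:%d (seq %d)" % key)
    seen[key] = digest

sniff(filter="tcp", prn=check, store=False)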
Re: [tor-relays] Fwd: [Site5 #TZZN-12908]: DMCA Complaint: mybox.ganton.ca
listed above is a violation of the U.S. Copyright Act, 17 U.S.C. 106. In > this regard, request is hereby made that you and all persons using this > account immediately and permanently cease and desist from unauthorized > copying and/or distribution of the Work. > > CEG informs you that you may be held liable for monetary damages, > including court costs and/or attorney fees if a lawsuit is commenced > against you for unauthorized copying and/or distribution of the Work > listed above. You have until Thursday, July 23, 2015 to access the > settlement offer and settle online. To access the settlement offer, > please visit https://www.copyrightsettlements.com/ and enter Case #: > U145963094 and Password: usp7k. To access the settlement offer directly, > please visit https://www.copyrightsettlements.com/?u=U145963094&p=usp7k > > Settlement Information: > Direct Settlement Link: > https://www.copyrightsettlements.com/?u=U145963094&p=usp7k > Settlement Website: https://www.copyrightsettlements.com/ > Case #: U145963094 > Password: usp7k > > If you fail to respond or settle within the prescribed time period, the > above matter may be referred to attorneys representing the Work's owner > for legal action. At that point the original settlement offer will no > longer be an option, and the settlement amount will increase significantly. > > Nothing contained or omitted from this correspondence is, or shall be > deemed to be either a full statement of the facts or applicable law, an > admission of any fact, or waiver or limitation of any of the Two Out of > Ten Productions Inc's rights or remedies, all of which are specifically > retained and reserved. > > The information in this notice is accurate. CEG has a good faith belief > that use of the material in the manner complained of herein is not > authorized by the copyright owner, its agent, or by operation of law. > CEG and the undersigned declare under penalty of perjury, that CEG is > authorized to act on behalf of Two Out of Ten Productions Inc. > > Sincerely, > > Ira M. Siegel, Esq. > Legal Counsel > > CEG TEK International > 8484 Wilshire Boulevard, Suite 515 > Beverly Hills, CA 90211 > > Toll Free: 877-526-7974 > Email: supp...@cegtek.com > Website: www.copyrightsettlements.com]]> > > - End ACNS XML > > This is an automated email. If you have questions or concerns, please > visit us at http://www.copyrightsettlements.com/contact_us.html. Replies > sent to d...@cegtek.com are not read. > > Please review the following complaint and follow up with us to let us > know the steps you intend to take to resolve this. Failure to follow up > within that time may lead to suspension of your account.Please note that > under the Digital Millennium Copyright Act, you have the right to file a > counter-notice claiming that either > > (a) that the Claimant is wrong and that the Infringing Material is > lawfully posted on the Web Site or > (b) that the Infringing Material has been misidentified. > > You can use this online form at > http://support.scribd.com/entries/22993-DMCA-counter-notification-template > which is an editable template containing all the required elements for > counter-notification which you can send back to us. > > Thanks, > > Carl B. 
> Abuse Administrator, Site5.com > > Ticket Details > === > Ticket ID: TZZN-12908 > Department: Level 2 Support > Status: Awaiting Customer Reply > Backstage Login | Service Notices | Support Center | Knowledge Base > > ___ > tor-relays mailing list > tor-relays@lists.torproject.org > https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
[tor-relays] Ports 465 and 587 vanished from reduced exit policy?
It appears that some years ago someone quietly removed ports 465 and 587 from the reduced exit policy at https://trac.torproject.org/projects/tor/wiki/doc/ReducedExitPolicy without an explanation. I've added them back in, since these ports should only be used for user-authenticated SMTP, and not spam. Has anyone experienced any abuse from these ports that involved non-authenticated mail/spam? Otherwise, it seems that exit operators who were using the reduced exit policy should consider updating their policies to include these ports. -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
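Exit operators who want a quick way to check whether their currently published policy accepts these ports can ask Onionoo. The sketch below assumes Onionoo's details endpoint and its exit_policy_summary field (an "accept" or "reject" list of port ranges); substitute your own fingerprint for the placeholder.

# Check whether an exit's currently published policy accepts the mail
# submission ports, using Onionoo's details document (assumed here: the
# "exit_policy_summary" field, which holds an "accept" or "reject" list
# of port ranges). Replace the placeholder with your relay's fingerprint.

import json
import urllib.request

FINGERPRINT = "0123456789ABCDEF0123456789ABCDEF01234567"  # placeholder

def port_in_ranges(port, ranges):
    for r in ranges:
        low, _, high = r.partition("-")
        if int(low) <= port <= int(high or low):
            return True
    return False

def accepts_port(summary, port):
    if "accept" in summary:
        return port_in_ranges(port, summary["accept"])
    return not port_in_ranges(port, summary["reject"])

url = "https://onionoo.torproject.org/details?lookup=%s" % FINGERPRINT
with urllib.request.urlopen(url) as response:
    relay = json.loads(response.read().decode("utf-8"))["relays"][0]

for port in (465, 587):
    state = "accepted" if accepts_port(relay["exit_policy_summary"], port) else "rejected"
    print("port %d: %s" % (port, state))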
Re: [tor-relays] BWauth no-consensus state in effect
Roger Dingledine: > On Thu, Jul 30, 2015 at 08:53:33PM +0200, nusenu wrote: > > Has this fallback happened before (=some experience on the potential > > impact available) or is this outage happening for the first time since > > the bwauths are in place? > > Indeed, it happened a few times back in 2010-2011 when we were > first rolling out the bwauths: > https://metrics.torproject.org/torperf.html?graph=torperf&start=2009-05-06&end=2015-08-04&source=all&filesize=50kb > but it's been mighty stable since then. > > Interestingly, you'd expect a big bump in torperf response times > when we switched to self-advertised weights. But there isn't one: > https://metrics.torproject.org/torperf.html?graph=torperf&start=2015-07-06&end=2015-08-04&source=all&filesize=50kb Some years ago, Karsten made a nice overlay of bw auth failures to torperf data, which allowed us to see a 4-5X reduction in torperf fetch times while the bw auths were active vs not. I just added a comment to https://trac.torproject.org/projects/tor/ticket/16696#comment:3 asking Karsten to repeat this visualization. That might be a bad place to do it, though. Might need a new ticket (or tickets)? > I'm guessing this is because we have enough relays, with enough capacity, > to handle the current load adequately. I'm wondering if the downtime in this case hasn't happened for a long enough stretch of consecutive consensus periods for it to impact the network. That might be one explanation. Spare capacity might be another. > But that doesn't mean the current relays are useless. Historically > speaking, it means pretty soon more users will show up, once word gets > out that Tor isn't as slow as it used to be. :) I worry that the "capacity economics" involved here might not have the same properties as they used to, and that we might not see increased adoption as a result. In particular, the switch to one guard makes it much more likely that a new user will pick a slow guard and this will cause them to have a horrible Tor experience. With three guards, you had a much better chance to get at least one fast guard in that case, and then CBT could sort out which was which. In my case, I usually pick my guards manually for ease of use with my firewall, and as a result performance is always very fast with the three fast guards I have chosen (I often get throughput 750k-1MB/sec for large transfers). In some instances where I have not selected my guards manually, Tor Browser is unbearably slow. Like really, really painfully slow. The whole time. Until I reinstall it. This makes me think that the performance of people who pick guards from the tail is much much worse than the performance of the top guards, and this property likely is acting as a deterrent to adoption. If even a small fraction of users perceive Tor as totally unusable, the word of mouth is still going to be "OMG Tor is like SO UNUSABLY SLOW!!" So long as this keeps happening, I suspect it is unlikely for people to rush to Tor because it is now faster. I think once we expect most of the clients to have switched to 1 guard, we should get some torperf graphs going for guards of various capacities, and see what the actual effects of this are. -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
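A back-of-the-envelope way to see why the tail matters so much more with a single guard: if some fraction of (bandwidth-weighted) guard selections land on a guard too slow for a tolerable experience, the chance of being stuck with no fast guard at all changes dramatically between one and three guards. The 30% figure below is an invented parameter, not measured data.

# Back-of-the-envelope illustration of the one-guard vs. three-guard
# point. p_slow is an invented fraction of bandwidth-weighted guard
# selections that land on a guard too slow for a tolerable experience.

p_slow = 0.30

p_stuck_with_one_guard = p_slow          # your only guard is slow
p_stuck_with_three_guards = p_slow ** 3  # all three guards are slow

print("P(no fast guard) with 1 guard:  %.1f%%" % (100 * p_stuck_with_one_guard))
print("P(no fast guard) with 3 guards: %.1f%%" % (100 * p_stuck_with_three_guards))
# With p_slow = 0.30 this is 30% vs. 2.7%, which is why the slow tail
# hurts so much more now that clients pin a single guard.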
Re: [tor-relays] clarification on what Utah State University exit relays store ("360 gigs of log files")
Sharif Olorin: > > I would expect most US universities to be logging netflow in the very > > least. Even if the Tor operator isn't keeping logs, it seems safe to assume > > the network operator is. > > I'd be surprised if it was different for non-US universities - I'd > expect this to be the case for every university with its own AS, and > probably most without. It's not specific to universities either; it > would be a rare ISP that doesn't retain netflow for traffic accounting > purposes. It's often somewhat aggregated, but to varying degrees - the > last such system I worked on was designed to retain indefinitely at > sub-minute granularity for training/crossvalidation of network anomaly > detection. Green & Sharif (& any others with direct netflow experience) - At what resolution is this type of netflow data typically captured? Are we talking about all connection 5-tuples, bidirectional/total transfer byte totals, and open and close timestamps, or more (or less) detail than this? Are timestamps always included? Are bidirectional transfer bytecounts always included? Are subsampled packet headers (or contents) sometimes/often included? What about UDP sessions? IPv6? I think for various reasons (including this one), we're soon going to want some degree of padding traffic on the Tor network at some point relatively soon, and having more information about what is typically recorded in these cases would be very useful to inform how we might want to design padding and connection usage against this and other issues. Information about how UDP is treated would also be useful if/when we manage to switch to a UDP transport protocol, independent of any padding. Thanks a bunch! -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] clarification on what Utah State University exit relays store ("360 gigs of log files")
grarpamp: > On Wed, Aug 12, 2015 at 7:45 PM, Mike Perry wrote: > > At what resolution is this type of netflow data typically captured? > > > > Are we talking about all connection 5-tuples, bidirectional/total > > transfer byte totals, and open and close timestamps, or more (or less) > > detail than this? > > All of the above depends on which flow export version / aggregation you > choose, until you get to v9 and IPFIX, for which you can define your fields. > In short... yes. > > But consider looking at average flow lifetimes on the internet. There may > be case for going longer, bundling or turfing across a range of ports to > falsely > trigger a record / bloat, packet switching and so forth. This interests me, but we need more details to determine what this looks like in practice. I suspect that this is one case where the switch to one guard may have helped us. However, Tor still closes the TCP connection after just one hour of inactivity. What if we kept it open longer? Or what if the first hop was an encrypted UDP-based PT, where it was not clear if the session was torn down or closed? > > recorded in these cases would be very useful to inform how we might want > > to design padding and connection usage against this and other issues. > > "Typical" is really defined by the use case of whoever needs the flows, > be it provisioning, engineering, security, operations, billing, bigdata, etc. > And only limited by the available formats, storage, postprocessing, > and customization. IPFIX and "Typically", I appreciate your answers grarpamp. They're "typically" correct, but sometimes they have more flavor than I'm looking for, and in this case I am worried it may end up silencing the people I'd really like to hear from. I want real data from the field, here. Not speculation on what is possible. > > I think for various reasons (including this one), we're soon going to > > want some degree of padding traffic on the Tor network at some point > > relatively soon > > Really? I can haz cake nao? Or only after I pump in this 3k email and > watch 3k come out the other side to someone otherwise idling ;) You can say that, but then why isn't this being done in the real world? The Snowden leaks seem to indicate exploitation is the weapon of choice. I suspect other factors are at work that prevent dragnet correlation from being reliable, in addition to the economics of exploits today (which may be subject to change). These factors are worth investigating in detail, and ideally before the exploit cost profiles change. As such, I still look forward to hearing from someone who has worked at an ISP/University/etc where this is actually practiced. What is in *those* logs? Specifically: Can we get someone (hell, anyone really) from Utah to weigh in on this one? ;) Otherwise, the rest is just paranoid speculation, and bordering on trolled-up misinformation. :/ -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] clarification on what Utah State University exit relays store ("360 gigs of log files")
Sharif Olorin: > > At what resolution is this type of netflow data typically captured? > > For raw capture, timestamps are typically second-resolution. The > resolution post-aggregation is a different question. Keep in mind that > netflow is just the most common example; many networks don't use Cisco > netflow, but have something that meets the same requirements, > storing relatively more or less data (e.g., pmacct, bro). > > > Are we talking about all connection 5-tuples, bidirectional/total > > transfer byte totals, and open and close timestamps, or more (or less) > > detail than this? > > That's about right; some systems (e.g., pmacct in some configurations) > store a four-tuple of (src,dest,tx,rx) while throwing out the > ports and aggregating over the tx and rx flows such that connections > can no longer be uniquely identified. What's stored from Cisco netflow > is quite flexible[0]. Other systems like bro default to storing > one record per connection, with all the information in a five-tuple > plus things like IP TOS and byte counts. > > > Are timestamps always included? > > Yes, to some granularity (there's not much point in storing connection info > without times, for any of the reasons people normally store connection > info). The most recent system I set up (bro) records connections with > second-precision timestamps; the one before that (pmacct) stored > aggregates over ten seconds (src,dest,tx,rx). So in the bro-based system (which sounds higher resolution) the final logged data was second-precision timestamps on full connection tuples? So if I have a connection to a Tor Guard node opened for 8 hours, at the end of the session, your system would record a single record with: (my_ip,my_port,guard_ip,guard_port,tx,rx,timestamp_open,timestamp_close) Or would it record 8*60*60 == 28800 records, with one record stored per second that the connection was open/active? > > I think for various reasons (including this one), we're soon going to > > want some degree of padding traffic on the Tor network at some point > > relatively soon, and having more information about what is typically > > recorded in these cases would be very useful to inform how we might want > > to design padding and connection usage against this and other issues. > > arma or others can probably explain why this is a hard problem; I > don't know enough in this area to comment. I think any system that is storing connection-level data (as opposed to one record per timeslice of activity on a tuple) is likely to be rather easy to defend against correlation. I also think that systems that store only sampled data will also be very easy to defend against correlation. Murdoch's seminal IX-analsysis work required 100-500M transfers to get any accuracy out of sample-based correlation at all, and even then the false positives were a serious problem, even when correlating a small number of connections. We have a huge problem right now where all of the research in this area claimed extremely effective success rates, and swept any mitigating factors under the rug (especially false positives and the effects of large amounts of concurrent users or additional activity). > > Information about how UDP is treated would also be useful if/when we > > manage to switch to a UDP transport protocol, independent of any > > padding. > > I don't think UDP helps you at all here. What makes you think it might? 
Well, it seems harder to store a full connection tuple for open until close, because you have no idea when the connection actually closed (unless you are recording a tuple for every second during which there is any activity, or similar). -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] clarification on what Utah State University exit relays store ("360 gigs of log files")
grarpamp: > On Thu, Aug 13, 2015 at 3:40 AM, Mike Perry wrote: > > However, Tor still closes the TCP connection after just one > > hour of inactivity. What if we kept it open longer? > > The exporting host has open flow count limited by memory (RAM). > A longer flow might be forced to span two or more records. > The "flags" field of some tools and versions may not mark > a SYN seen in records 2+, the rest of tuple would stay same. > Active timeout gives periodic data on longer flows, typically > retaining start time but implementations can vary on state. > > Here's an early IOS 12 default... > Active flows timeout in 30 minutes (1~60) > Inactive flows timeout in 15 seconds (10~600) This is helpful. To clarify, when a record is split due to timeout, a new record will have the start and end timestamps for the new flow? Do collectors tend to recombine these split flows? Otherwise, from these defaults, it sounds like Tor's one hour timeout on client TLS connections is reasonable, and perhaps not worth raising, since even if we were using padding and keep-alives, the flow data would still record a fresh byte count record + timestamp every 30 minutes? > > As such, I still look forward to hearing from someone who has worked at > > an ISP/University/etc where this is actually practiced. What is in > > *those* logs? > The questions were of a general "intro to netflow" nature, thus > the links, they and other resource describe all the data fields, > formation of records, timeouts, aggregation, IPFIX extensibility, etc. > Others and I on these lists know what "360 gigs" of netflow looks like. Well, right, then. Let's get to the meat of it. > *What* specific info are you looking for beyond that? I am looking to understand what "360 gigs" aka "(3.2 billion records)" of netflow over 3 months looks like, and also if we can expect this to be standard practice, somewhat outside the norm, or indicative of someone who has specifically tuned their netflow config to attack Tor (should the opportunity arise). Assuming the boingboing comment is accurate, and it's just one exit IP, then we're probably looking at two exits worth of data (either UtahStateExit0+UtahStateExit1, or UtahStateExit2+UtahStateExit3). Each of these exit pairs appears to have averaged a little over 10Mbit/sec sustained over the most recent 3 month period according to https://globe.torproject.org. The exits are running some version of the Reduced Exit Policy, so there should be no bittorrent traffic. Likely mostly web traffic by connection count, and probably even by byte count. In three months, there are 7,776,000 seconds. So we're looking at roughly 410 records per second in this dataset. For 10Mbit/sec worth of sustained web traffic, that sounds like connection-level resolution to me. Do you agree? -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] clarification on what Utah State University exit relays store ("360 gigs of log files")
Sharif Olorin: > Mike, > > Additionally, I should clarify that bro and netflow have some > fundamental differences and are usually used for different things (but > both are common in large networks). Bro's very stateful and is more > focused on IDS-type applications, whereas netflow is more directed > towards traffic accounting, which is why bro has all the stateful > stuff about TCP connections. bro would be more commonly found at > a university, but netflow's probably more relevant if you're looking > at what the typical ISP will retain for a long time. Yes, unfortunately this is why "just set up bro/netflow at home and try it!" is not really helpful. It is obvious that these systems can in theory be configured to log+analyze all data for all time, especially if it is just my tiny DSL line with one person browsing the web over Tor and I have a few TB worth of disk to burn. However, speculation about the evil BOFH who twiddles his mustache and tunes netflow to deanonymize all Tor users forever is rather boring to me. It's a scenario that's unlikely to happen at scale, or be practical for full analysis of the entire Tor network. Even if we are looking at such a BOFH in the Utah case, we have yet another datapoint against the evil BOFH correlation theory: These logs were useless! The important question to me is: "If we assume honest Tor nodes, what level of logging is likely to be practiced at their ISP or AS today without their knowledge, and what technical measures are available to us to reduce that potential impact?" In this Utah exit case, the exit operator in question is indeed honest, and we're looking at an upstream admin who just happened to be logging stuff, likely as per some standard (if heavy-handed) connection-level logging policy. I suspect that type of adversary will be possible to defeat with similar amounts of padding that will defeat hidden service circuit setup fingerprinting, website traffic fingerprinting, traffic type classification, and a host of other low-resource attacks... -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] clarification on what Utah State University exit relays store ("360 gigs of log files")
Mike Perry: > grarpamp: > > The questions were of a general "intro to netflow" nature, thus > > the links, they and other resource describe all the data fields, > > formation of records, timeouts, aggregation, IPFIX extensibility, etc. > > Others and I on these lists know what "360 gigs" of netflow looks like. > > Well, right, then. Let's get to the meat of it. > > > *What* specific info are you looking for beyond that? > > I am looking to understand what "360 gigs" aka "(3.2 billion records)" > of netflow over 3 months looks like, and also if we can expect this to > be standard practice, somewhat outside the norm, or indicative of > someone who has specifically tuned their netflow config to attack Tor > (should the opportunity arise). > > Assuming the boingboing comment is accurate, and it's just one exit IP, > then we're probably looking at two exits worth of data (either > UtahStateExit0+UtahStateExit1, or UtahStateExit2+UtahStateExit3). > > Each of these exit pairs appears to have averaged a little over > 10Mbit/sec sustained over the most recent 3 month period according to > https://globe.torproject.org. The exits are running some version of the > Reduced Exit Policy, so there should be no bittorrent traffic. Likely > mostly web traffic by connection count, and probably even by byte count. > > In three months, there are 7,776,000 seconds. So we're looking at roughly 410 > records per second in this dataset. > > For 10Mbit/sec worth of sustained web traffic, that sounds like > connection-level resolution to me. Do you agree? (Yay! Thinking once and posting two posts at once to three different lists. I'm like some kind of Internet champion! ;) I think I needed to do one more division. This is roughly one record per 3KB of traffic (which I think you alluded to earlier). Rather high if we expect this to be web traffic, even if there was only 1 web request per connection. So then, what is the most likely configuration that would generate this many records? Is it indeed likely to be some BOFH scenario, or might there be some common (if half-insane) policy that ends up producing this many records? Here's Globe for UtahStateExit2 and 3 for easy access: https://globe.torproject.org/#/relay/B4E641BC42DDB6FD2526CFF80504AB5221B0EB82 https://globe.torproject.org/#/relay/7E4E1CC167300932F05AC70ECD2B9A298732C6E2 The bandwidth histories have no current data, but you can click on the 3 month tab to get the numbers I used. -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
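Spelled out, the arithmetic behind these numbers is just:

# The back-of-the-envelope math behind the numbers above.

records = 3.2e9                    # "3.2 billion records"
seconds = 90 * 86400               # ~3 months = 7,776,000 seconds
records_per_sec = records / seconds
print("records per second: %.0f" % records_per_sec)           # ~410

bytes_per_sec = 10e6 / 8           # ~10 Mbit/sec sustained for the exit pair
traffic_per_record = bytes_per_sec / records_per_sec
print("traffic per record: ~%.1f KB" % (traffic_per_record / 1000.0))  # ~3.0 KB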
Re: [tor-relays] clarification on what Utah State University exit relays store ("360 gigs of log files")
grarpamp: > On Thu, Aug 13, 2015 at 3:40 AM, Mike Perry wrote: > >> But consider looking at average flow lifetimes on the internet. There may > >> be case for going longer, bundling or turfing across a range of ports to > >> falsely > >> trigger a record / bloat, packet switching and so forth. > > > > This interests me, but we need more details to determine what this looks > > like in practice. > > NANOG list could link specific papers regarding nature of the internet. > The various flow exporters have sensible default timeouts tend cover > that ok for purposes intended. > > > I suspect that this is one case where the switch to one guard may have > > helped us. > > In that various activities such as ssh, browsing, youtube, whatever > are confined to being multiplexed in one stream, that makes sense. > > > However, Tor still closes the TCP connection after just one > > hour of inactivity. What if we kept it open longer? > > The exporting host has open flow count limited by memory (RAM). > A longer flow might be forced to span two or more records. > The "flags" field of some tools and versions may not mark > a SYN seen in records 2+, the rest of tuple would stay same. > Active timeout gives periodic data on longer flows, typically > retaining start time but implementations can vary on state. > > Here's an early IOS 12 default... > Active flows timeout in 30 minutes (1~60) > Inactive flows timeout in 15 seconds (10~600) > > Also consider what is wished to hide, big iso download, > little http clicks, start time of some characteristic session > rippling across or appearing at edges, active data pumping attack. > And what custom flowish things and flow settings an adversary > might be doing to observe those. Traditional netflow seems > useful as idea base to form a better heuristic analysis system. I submitted a proposal to tor-dev describing a simple defense against this default configuration: https://lists.torproject.org/pipermail/tor-dev/2015-August/009326.html I'm also working on an implementation of that defense: https://trac.torproject.org/projects/tor/ticket/16861 Anyone with netflow experience should feel free to chime in there (or here if you are not subscribed to tor-dev), but please be mindful of the adversarial considerations in section 3 (unless you believe that adversary model to be invalid, but please explain why). -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] clarification on what Utah State University exit relays store ("360 gigs of log files")
grarpamp: > On Fri, Aug 21, 2015 at 12:30 AM, Mike Perry wrote: > > I submitted a proposal to tor-dev describing a simple defense against > > this default configuration: > > https://lists.torproject.org/pipermail/tor-dev/2015-August/009326.html > > nProbe should be added to the router list, it's a very popular > opensource IPFIX / netflow tap. > http://www.ntop.org/products/netflow/nprobe/ While ntop is FLOSS, nProbe itself seems to be closed source. There's a FAQ on the page about it. As such, I was only able to discover that its default inactive/idle timeout is 30s. I couldn't find a range. > For those into researching other flow capabilities... > There are also some probes in OS kernels and > some other opensource taps, they're not as well known > or utilized as nProbe. > Other large hardware vendors include Brocade, Avaya, > Huawei, and Alcatel-Lucent. Out of all of these, I was only able to find info on Alcatel-Lucent. It uses cflowd, which appears to be a common subcomponent. Its timeout ranges are the same as Cisco IOS. What I really need now are examples of common routers that have a default inactive/idle timeout below 10s, or allow you to set it below 10s. So far I have not found any. > Lots of SDN and monitoring projects can plug in > with gear like this, because, FTW... > > http://telesoft-technologies.com/technologies/mpac-ip-7200-dual-100g-ethernet-accelerator-card > http://www.hitechglobal.com/IPCores/100GigEthernet-MAC-PCS.htm > http://www.napatech.com/sites/default/files/dn-0820_nt100e3-1-ptp_data_sheet_3.pdf > https://www.cesnet.cz/wp-content/uploads/2015/01/hanic-100g.pdf > http://www.ndsl.kaist.edu/~kyoungsoo/papers/2010-lanman-100Gbps.pdf > http://info.iet.unipi.it/~luigi/netmap/ I think these devices are wandering into the "adversarial admin" territory (see section 3 of the proposal). I want to focus on the case where the adversary demands/sniffs/exploits routers likely to be installed in most networks. -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] Tor relays without AESNI
Sebastian Hahn: > Hi, > > > On 16 Sep 2015, at 05:22, nobody wrote: > > Hi all, > > > > I am looking at renting a dedicated server on a unmetered 100 Mbit/s > > connection, but the CPU is a Intel G850, which is old (Q2 2011) and does > > not have AES-NI. Will this CPU be too slow to make use of the bandwidth? > > I'm currently running pushing between 14 and 25MB/s each direction on > a machine that doesn't have aesni using two separate Tor processes. Each > uses one core up to a maximum of around 80%. > > CPU is Intel(R) Core(TM)2 Quad CPU Q8200 @ 2.33GHz. If this is 14-25 Megabytes/sec per core (and corresponding tor process), then this is also consistent with what I remember. Without AES-NI: ~100Mbit per core. With AES-NI: over 300Mbit per core. -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
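For operators sizing similar hardware, a rough way to check whether a CPU advertises AES-NI and to ballpark single-core AES throughput (assumes Linux and a stock OpenSSL; Tor's relay crypto is AES-128-CTR, so treat the numbers only as a rough guide):

    grep -c aes /proc/cpuinfo        # a nonzero count means AES-NI is advertised
    openssl speed -evp aes-128-cbc   # rough single-core AES benchmark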
Re: [tor-relays] HoneyPot?
Green Dream: > Mirimir: aside from the nickname, do you have any reason to believe it was > out of the ordinary? The exit policy mostly only seems to allow > non-encrypted services (80 but not 443, 143 A while ago we were actively marking nodes that only allowed non-encrypted services as BadExit, since there were no satisfactory explanations given as to why nodes should need this policy. Back then, the most common explanation people gave was "I need the ability to block traffic that looks evil." Unfortunately, all mechanisms available to do this will also end up blocking legitimate content at some rate. Nobody was using anything more advanced than snort-style regular expressions that matched things that happened to look like exploits. FWIW, I am personally in favor of reinstating such a policy. I doubt the situation has changed. -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] Sorry, HotMail users, you're rejected
Thus spake Steve Snyder (swsny...@snydernet.net): > Got another threatening e-mail from my ISP today, prompted by another SpamCop > complaint regarding spam run through HotMail. HotMail records the address of > the originating server and that, again, is my exit node. > > So I have to curtail exit access to HotMail. Yeah, it sucks, but I know of > no way to block the sending of webmail while still allowing it to be > retrieved. Make sure this is done via exit policy and not iptables or DNS filter. Also, are you sure you have the whole hotmail netblock? -- Mike Perry Mad Computer Scientist fscked.org evil labs pgph4ZHa0Zjka.pgp Description: PGP signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
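A sketch of what the exit-policy approach might look like in torrc. The netblock below is a placeholder from the documentation range, not a real Hotmail range; you would need to enumerate the actual Microsoft netblocks yourself:

    ExitPolicy reject 192.0.2.0/24:80
    ExitPolicy reject 192.0.2.0/24:443
    # ...followed by whatever accept/reject lines you already run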
Re: [tor-relays] Sorry, HotMail users, you're rejected
Thus spake Steve Snyder (swsny...@snydernet.net): > OK, I'm backing down from this after discovering the huge number of > IP addresses associated with *.hotmail.com and *.live.com names. > > >They understood entirely > > Must be nice. My ISP, on being informed that I run a Tor exit node, > informed me that I was responsible for all traffic leaving my server > and that they would cancel my account if I didn't take stops to > solve the problem. The response here is to tell them you accept full legal responsibility for the traffic. The reality is that your actual legal responsibility is zero in most countries, but it might make them feel better that you acknowledge it will be you proving that in court, not them. -- Mike Perry Mad Computer Scientist fscked.org evil labs pgpqhUGZ8U6Ao.pgp Description: PGP signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
[tor-relays] Bandwidth Authority PID Feedback Experiment #2 Starting
Over Thanksgiving and into early this week, we ran an experiment to test a feedback mechanism to attempt to allocate usage of the Tor network such that the measured stream capacities through all relays became equal: https://gitweb.torproject.org/torflow.git/blob/HEAD:/NetworkScanners/BwAuthority/README.spec.txt#l354 This experiment failed in 3 ways: 1. It drove many relays down to 0 utilization. Scott Bennett noted this in his post to tor-relays, and at least one 10Mbit relay operator also commented on the traffic drop-off of their node in #tor. 2. It only created one PID 'setpoint' for the entire network, even though different types of nodes see different load characteristics, and despite it being impossible to shift load from an Exit node to a Middle node, for example. 3. It kept allocating bandwidth to some relays (especially Middle and non-default-policy Exits) until they hit INT32_MAX in the consensus, and everything finally exploded. We then shut off the feedback by removing the consensus parameters. I've made five major changes to try to address these issues: 1. Don't perform multiple rounds of negative feedback for slow nodes. 2. We now group nodes by their flags into four categories (Guard, Middle, Exit, and Guard+Exit), and compute a different PID setpoint for each class. 3. Circuit failure now counts more. Circuit failure is our CPU overload signal, as nodes that hit CPU overload begin dropping onionskins and failing extends. Instead of using the circuit success rate as a multiplier against the pid_error, we now actually compute a circ_error similar to the pid_error, and use it as the pid_error if it is more negative. We also now set FastFirstHopPK 0 to ensure that Guard nodes also get tested for circuit failure. 4. Raised the PID setpoint slightly, which should prevent us from piling quite so much weight onto fast relays. 5. Cap feedback via a consensus parameter. All of these changes are governed by consensus parameters. See: https://gitweb.torproject.org/torflow.git/blob/HEAD:/NetworkScanners/BwAuthority/README.spec.txt#l481 for more details. The parameters governing feedback are bwauthnsbw=1 and bwauthti. So long as one or both of these are present and non-zero in the consensus parameter list, the feedback experiment is active. We'll probably be running this next experiment for about a week (or perhaps longer if it doesn't explode and seems to improve performance on https://metrics.torproject.org/performance.html) starting tonight or tomorrow. Please keep an eye on your relays and tell us if anything unexpected happens over the next week or so. Thanks! -- Mike Perry pgpwKXGhGFtO5.pgp Description: PGP signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
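For relay operators who want to check whether the feedback experiment is currently active, one option is to look at the params line of the consensus your own tor has cached (a sketch; it assumes a DataDirectory of /var/lib/tor and that your tor keeps a full cached-consensus file):

    grep '^params' /var/lib/tor/cached-consensus | tr ' ' '\n' | grep '^bwauth'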
Re: [tor-relays] Bandwidth Authority PID Feedback Experiment #2 Starting
Thus spake Sebastian Urbach (sebast...@urbach.org): > Am Sat, 3 Dec 2011 18:57:22 -0800 > schrieb Mike Perry : > > Hi, > > > I've made five major changes to try to address these issues: > > happens over the next week or so. > > Well, somebody has to say it and it seems to be me. The second try is > also a complete bust. Since your post the performance is getting worse > and worse every day, though my relays behaviour is pretty normal. Yes, I have noticed the huge gain in the metrics.tp.o graph. > Any conclusions from the second try Mike ? I do not fully understand the cause of it yet, but I did find a rather nasty bug in the treatment of Guard nodes, where we were not properly using the "bwauthmercy" consensus param for them, and were punishing slow guards through multiple rounds of negative feedback. This could skew the metrics data, and could alter the flow of traffic through the rest of the network, but there may be other issues at work, too :/. I am going to wait until the Guard numbers recover from the bug, which should take 24-48hrs, and then dig deeper next week if the metrics numbers are still bad. Please ping back then if your relays still appear to be underloaded (or overloaded to the point of emitting warns about having too many circuit creation requests in the log). -- Mike Perry pgpiJ2tzfM1Sw.pgp Description: PGP signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] Bandwidth Authority PID Feedback Experiment #2 Starting
Thus spake Sebastian Urbach (sebast...@urbach.org): > Seems to get better in the last hours ... > > I want to suggest strongly a change for the metrics / performance site. > The displayed default size is 50KB and should be changed to 1MB. 50 KB > ist out of touch with reality for any service i can think of. Could you > please encourage your colleagues to do that ? We want to optimize primarily for the low-latency web case, not bulk downloading. The one exception to this is Youtube/user-generated video, which I agree does fall more in the 1MB case. However, the state of web browser tech for user-generated video still sucks in the absence of Flash, so it is not yet our primary focus. -- Mike Perry pgpNaqqbt2Axf.pgp Description: PGP signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] Bandwidth Authority PID Feedback Experiment #2 Starting
Thus spake Jon (torance...@gmail.com): > In adding further info on the topic, I have noticed and looked back > over the past couple of weeks and have seen a drop of of usage of > about 43% s of today. I am not in the higher bracket as others, but I > have been in the 3 bars bracket only up till recently. > > Don't know if this will help, but if it does, add it to the rest of > the info you have already received. What helps more is your node nick and/or idhex string. If you're not comfortable talking about that publicly, knowing if you are Guard, Exit, Guard+Exit, or just Stable (Middle) node helps. I actually do expect that this system may cause some slower nodes (especially those with capacities close to or below the network stream average of ~70Kbytes/sec) to experience less traffic. This does not mean the nodes are useless. It just means that we should try to use them infrequently enough such that when they are used, they can provide enough capacity to not be a bottleneck in a circuit. We are still waiting for the effects of the Guard bug to fully dissipate. I am cautiously optimistic that things are getting better, but we'll need to keep an eye on things for a bit longer to be sure, I suspect. -- Mike Perry pgpNDRjUdtLcC.pgp Description: PGP signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] Bandwidth Authority PID Feedback Experiment #2 Starting
Thus spake Tim Wilde (twi...@cymru.com): > > I try to keep everything I do documented on that wiki. All these > > servers run four instances of Tor each (one per core) and traffic > > is accounted for in total. Also, keep in mind that vnstat counts > > both incoming and outgoing traffic, so 700Mbps in vnstat are really > > only 375 per direction. > > Ah, okay, thanks for the clarification, I was thinking those numbers > were for single Tor instances. That makes me feel a lot better then, > especially with the combination of directions. :) I'm pushing around > 600Mb/sec total in+out on my piece of bit iron so I'm much closer to > the same ballpark than I thought. Thanks, and thanks again for your > documentation! Moritz, Andy, Tim, and others with Gbit+ Guards and/or Exits: Could you guys ensure you are not running into TCP socket exhaustion on any of your relays? It is a possibility, esp for Guard+Exits with gobs of CPU and gobs of throughput. I am curious if we will need to do this or not: https://trac.torproject.org/projects/tor/ticket/4709 -- Mike Perry pgp2w2GmofAI1.pgp Description: PGP signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] Bandwidth Authority PID Feedback Experiment #2 Starting
Thus spake Mike Perry (mikepe...@torproject.org): > Thus spake Tim Wilde (twi...@cymru.com): > > > > I try to keep everything I do documented on that wiki. All these > > > servers run four instances of Tor each (one per core) and traffic > > > is accounted for in total. Also, keep in mind that vnstat counts > > > both incoming and outgoing traffic, so 700Mbps in vnstat are really > > > only 375 per direction. > > > > Ah, okay, thanks for the clarification, I was thinking those numbers > > were for single Tor instances. That makes me feel a lot better then, > > especially with the combination of directions. :) I'm pushing around > > 600Mb/sec total in+out on my piece of bit iron so I'm much closer to > > the same ballpark than I thought. Thanks, and thanks again for your > > documentation! > > Moritz, Andy, Tim, and others with Gbit+ Guards and/or Exits: > > Could you guys ensure you are not running into TCP socket exhaustion > on any of your relays? It is a possibility, esp for Guard+Exits with > gobs of CPU and gobs of throughput. > > I am curious if we will need to do this or not: > https://trac.torproject.org/projects/tor/ticket/4709 It looks like Moritz is seeing some evidence of TCP source-port exhaustion in his Tor logs: "[warn] Error binding network socket: Address already in use". He's also monitoring TCP connection counts on each IP interface: netstat -ntap | grep $INTERFACE_IP | wc -l It appears that right now, he's at only about ~10k connections per IP, and not experiencing any log lines at the moment. Is it possible this is a transient condition caused by overly-aggressive scrapers and/or torrenters who flock to the node for a short while and then move on? Reports from others on the recent appearance or increased prevalence of that warn (or other warns) would be helpful. -- Mike Perry pgpdCCewKDL9j.pgp Description: PGP signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
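Some rough checks for the source-port exhaustion case, assuming a Linux relay; the log path below is an example and should be adjusted to your setup:

    sysctl net.ipv4.ip_local_port_range                # how many ephemeral ports you have per source IP
    netstat -ntap | grep $INTERFACE_IP | wc -l         # per-IP connection count, as above
    grep -c 'Address already in use' /var/log/tor/log  # count of the warn in question (path is an assumption)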
Re: [tor-relays] Bandwidth Authority PID Feedback Experiment #2 Starting
Thus spake Tim Wilde (twi...@cymru.com): > We're not seeing source port exhaustion, but we are seeing two warns, > one of which I haven't been able to nail down: > > 2011 Dec 13 20:22:07.000|[notice] We stalled too much while trying to > write 8542 bytes to address "[scrubbed]". If this happens a lot, > either something is wrong with your network connection, or something > is wrong with theirs. (fd 409, type Directory, state 2, marked at > main.c:990). Hrm.. Haven't seen this one before... > 2011 Dec 13 22:26:45.000|[warn] Your computer is too slow to handle > this many circuit creation requests! Please consider using the > MaxAdvertisedBandwidth config option or choosing a more restricted > exit policy. [18 similar message(s) suppressed in last 60 seconds] Ah, we should be handling this issue with the fix for #1984: https://trac.torproject.org/projects/tor/ticket/1984 > The second warn I figure I should be tuning myself with > MaxAdvertisedBandwidth, and it's happening on BigBoy, the relay on > this box that's doing the majority of its bandwidth. So I'm not sure > if it's anything that your feedback loop should be involved in or not. It's a shame this log message makes such a crazy recommendation wrt MaxAdvertisedBandwidth. But I guess some tweak is better than no tweak. Hopefully we can make this go away without you needing to lower it, though. Can you ping me on IRC if you keep getting these warns after leaving MaxAdvertisedBandwidth alone? > One other data point, I have seen (also sporadically) some indications > in my system logs of hardware hangs on the ethernet interface all of > this is running through, so I'm slightly suspicious that it's to blame > for the *Dragon problems. It doesn't really explain why BigBoy isn't > affected though, and I haven't been able to definitively prove > anything yet, so I'm just not sure. This sounds incredibly familiar. What ethernet card + driver version do you have? Some card+driver combos are pretty abysmal about IRQ load balancing and interrupt optimizations, or at least they were on old kernels (which may still apply if you are on CentOS). -- Mike Perry pgpZotrS0hGmK.pgp Description: PGP signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] Bandwidth Authority PID Feedback Experiment #2 Starting
Thus spake Sebastian Urbach (sebast...@urbach.org): > Around the 24 of November the performance graph reported a value below > 4 sec. per 50kb, and right now including the last few days we can > barely reach 6 sec. per 50kb. > > Or in other words, we lost way more than 50 % peformance since that > day. Unless you guys are about to really step up the game i strongly > suggest to shred the chances made in the last weeks. > > The original plan was to speed up things with the modified bandwidth > scanners. Its probably safe to say that this experiment is a > complete bust. Yeah, I agree. I think that it's now clear that both the variance and the mean of the torperf graphs are way above norm, and we don't have much other explanation for it other than the feedback experiment not working. The question is why? Intuitively, feedback makes a lot of sense. If there is a ton of spare capacity on fast nodes, why shouldn't we try to use it? Similarly, if slow nodes can't keep up with the network throughput average, clearly they are a bottleneck and should have reduced traffic directed at them until they have more spare capacity. What is causing it to break so badly on the torperf graphs, then? Is it bugs? Is it bad choice of setpoints? Is it that we are using the wrong metric? Until we have answers for at least some of these questions, I am inclined to keep playing with it. I've made the following changes this afternoon: 1. On the assumption that we're seeing the huge increase in variance on torperf because faster nodes are only *sometimes* at max capacity, but most of the time have enough spare capacity to get good measurements, I have stopped "filtering" measurements for them (i.e. I have stopped selecting only the fast measurements). I am still applying filtering to overly punished nodes, so that they don't get punished too much by being paired with slow peers. 2. I have changed the circ failure target from the network average to 0.0% failure. 3. I have changed the Guard feedback interval from 2 weeks to 1 week. My plan is to let these changes run for another couple days, and if they don't seem to change anything, I plan to try 1 week on, 1 week off cycles of the experiment, to see if we can detect any patterns in exactly when and why torperf starts to go south. -- Mike Perry pgpEcEG5ASzOd.pgp Description: PGP signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] Bandwidth Authority PID Feedback Experiment #2 Starting
Thus spake Sebastian Urbach (sebast...@urbach.org): > > Yeah, I agree. I think that it's now clear that both the variance and > > the mean of the torperf graphs are way above norm, and we don't have > > much other explaination for it other than the feedback experiment not > > working. The question is why? > > I don't want to be the guy who posts always the bad news ... > > I can happily report that the load, since your last mail, increased > rapidly to max values and also the performance from the metrics page > seems to be increasing very well right now. > > If you want to take a look for yourself: > > https://metrics.torproject.org/routerdetail.html?fingerprint=0aff5440ae93f2ed679b20e543081710312b7333 > > Coincidink ;-) ? > > I dont think so ... > > I hope that this will relieve a little bit pressure from you, Mike. Just to be sure, I decided to shut off the experiment on Sunday at 3pm US Pacific time anyways. It looks like the performance has in fact gotten worse since then. Does this mean the feedback was definitely working? Who the hell knows. But it does seem clear that the transition between the two states is definitely a rocky one that requires patience and repeated trials to sniff out. I am going to leave it disabled for another week, just to see how it recovers, then turn it back on again, possibly with the addition of https://trac.torproject.org/projects/tor/ticket/4730 if I get a hot minute, but possibly without it, just to watch the transition again. Robert Ransom is quite insistent on the need for this as well: https://trac.torproject.org/projects/tor/ticket/4708, but that system doesn't really need me to build it, as it can be done independent of the bwauths. I also think #4730 is more likely to capture the problem we were seeing with fast nodes being fast only sometimes, and it will be way less work. Thanks for bearing with me, but we still might have a ways to go before we really get to the bottom of this one. I will keep you posted. -- Mike Perry pgpZpDjbckVKK.pgp Description: PGP signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] bandwidth scanner status ?
Thus spake Sebastian Urbach (sebast...@urbach.org): > I want to ask for an status update regarding the bandwidth scanner > project. It's been 3 weeks or so since we last heard anything. PID feedback has been off since I last notified the list about shutting it off, on Dec 19th. Some consensus params have changed due to dirauth instability, consensus method change, and confusion. In particular, the bwauthpid=1 param was explicitly set and unset a few times, and then set again, but the bw auths remained in their former "Section 2 compatible" mode of operation because no other feedback params were set for the duration. See https://gitweb.torproject.org/torflow.git/blob/HEAD:/NetworkScanners/BwAuthority/README.spec.txt#l477 for more info on the consensus params that alter bw auth behavior, and please read the rest of the spec if you're interested in getting more involved. -- Mike Perry Exterminate all dogma. Permit no exceptions. pgpIKbGotQ5Tk.pgp Description: PGP signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
[tor-relays] Towards a Tor Node Best Best Practices Document
/ram1 /mnt/tor-root/var/lib/tor/keys cd /mnt/tor-root/ chroot . start_tor.sh Once you start your tor process(es), you will want to copy your identity key offsite, and then remove it. Tor does not need it to remain on disk after startup, and removing it ensures that an attacker must deploy a kernel exploit to obtain it from memory. While you should not re-use the identity key after unexplained reboots, you may want to retain a copy for planned reboots and tor maintenance. scp /mnt/tor-root/var/lib/tor/keys/secret_id_key offsite_backup:/mnt/usb/tor_key rm /mnt/tor-root/var/lib/tor/keys/secret_id_key Upon suspicious reboots, you can verify the integrity of your tor image by simply calculating the sha1sum (perhaps copying the image offsite first). You do not need to do anything special with the var loopback. These steps should prevent even adversaries who compromise the root account on your system (by rebooting it, for example) from obtaining your identity keys directly, forcing them to resort to kernel exploits and memory gymnastics in order to do so. Don't forget to periodically update the libraries stored on your loopback root using a trusted offsite source, as they won't receive security updates from your distribution. One alternative to make your loopback fs creation, tor startup, and maintenance process simpler is to statically compile your image's tor binary on an offsite, trusted computer. If you do this, you should no longer need to bother with chrooting your tor processes or copying libraries around. However, it still does not save you from the need to recompile that binary whenever there is a security update to the underlying libraries, and it may come at a cost of exploit resistance due to the loss of per-library ASLR. Ok, that's it. What do people think? Personally, I think that if we can require a kernel exploit and/or weird memory gymnastics for key compromise, that would be a *huge* improvement. Do the above recommendations actually accomplish that? If so, should we work on providing scripts to make the loopback filesystem creation process easier, and/or provide loopback images themselves? Even if the APT defenses end up not working out, I would sleep a lot better at night if most relays deployed only the defenses against one-time key theft... Thoughts on that? -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
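A sketch of the integrity check described above; the image path and offsite location are placeholders:

    sha1sum /path/to/tor-root.img > /mnt/usb/tor-root.img.sha1   # record a known-good hash offsite
    # ...later, after a suspicious reboot:
    sha1sum -c /mnt/usb/tor-root.img.sha1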
Re: [tor-relays] Towards a Tor Node Best Best Practices Document
them so as to develop countermeasures. > I suspect getting the keys through either mechanism might be > trivial compared to getting the infrastructure in place to use > the keys for a non-theoretical attack that is cost-effective. The infrastructure is already there for other reasons. See for example, the CALEA broadband intercept enhancements of 2007 in the USA. Those can absolutely be used to target specific Tor users and completely transparently deanonymize their Tor traffic today, with one-time key theft (via NSL subpoena) of Guard node keys. > I think your proposed measures might be useful for a relay > operator with a compatible system who is interested in spending > more time on his relay's security than he already is. > > It's not clear to me, though, that they improve the security > of the Tor network significantly enough to be worth requiring > them or even calling them best practices (which could demotivate > operators who can't or don't want to implement them). Did I fail to motivate the defenses? In what way can we establish "more realistic" best practice defenses that are grounded in real attack scenarios and ordered by attack cost vs defense cost? I thought I had accomplished that... > Trying to require the steps or shaming operators into following > them might reduce the number of relay operators (or limit their > growth) significantly enough to make the attacks you seem to be > concerned about cheaper ... > > Having said that, I don't see anything wrong with putting your > suggestions in a section that starts with a paragraph like: > > | Here are a couple of things you could do to improve your > | relay's security some more. Whether or not you consider > | them worthwhile is up to you and if you decide against some > | or all of them or if they don't work on your system, your > | relay is still appreciated. Ok, yes, I have no intention of making anything mandatory. It's not really possible anyways, and heterogeneity probably trumps it. For the paragraphs I've trimmed, assume I more or less agree with your statements. -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] Towards a Tor Node Best Best Practices Document
Thus spake Mike Perry (mikepe...@torproject.org): > You're failing to see the distinction made between adversaries, which > was the entire point of the motivating section of the document. Rekeying > *will* thwart some adversaries. > > > I suspect getting the keys through either mechanism might be > > trivial compared to getting the infrastructure in place to use > > the keys for a non-theoretical attack that is cost-effective. > > The infrastructure is already there for other reasons. See for example, > the CALEA broadband intercept enhancements of 2007 in the USA. Those can > absolutely be used to target specific Tor users and completely > transparently deanonymize their Tor traffic today, with one-time key > theft (via NSL subpoena) of Guard node keys. Btw, before the above causes someone to jot "Enemy Combatant" down in a file somewhere, I just want to clarify that I believe "lawful intercept" is a total sham, dangerously weakening critical infrastructure for little gain. Once deployed (too late!), it can and will be exploited by a wide variety of actors (too late!). Also, replace "NSL subpoena" with "any variety of intimidating thugs with guns (and/or money)". They're pretty much the same level of "due process" IMO. Further, I think we can expect many/most relay operators to run straight to the EFF/ACLU/FBI in the event of coercion (destination depends on adversary). However, I do *not* believe we can expect the same from arbitrary datacenter admins. Hence, I feel that one-time key theft is a valid and realistic adversary, given current weaknesses in the Tor protocol and client software. -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] Towards a Tor Node Best Best Practices Document
Thus spake Fabian Keil (freebsd-lis...@fabiankeil.de): > > You're failing to see the distinction made between adversaries, which > > was the entire point of the motivating section of the document. Rekeying > > *will* thwart some adversaries. > > I'm not arguing that rekeying is useless. I just think that for > most Tor relays reboots are usually not the result of a compromise > and the lack of reboots doesn't proof anything either (I'm aware > that you weren't implying this). > > For a relay operator concerned about key theft, rekeying after > a certain amount of time, even if there's no sign of a compromise, > seems to make more sense to me. They seem orthogonal, but yes, I should mention periodic rekeying if your relay machine uptime exceeds something like 6mos? 1yr? RSA-1024 probably doesn't have a very long shelf-life as-is... > > > Are "weird memory gymnastics" really that much more effort > > > than getting the relevant keys through ptrace directly? > > > > If they require a kernel exploit to perform, absolutely. If there are > > memory tricks root can perform without a kernel exploit, we should see > > if we can enumerate them so as to develop countermeasures. > > My assumption was that a root user could get the key (or reenable > ptrace) through /dev/mem without relying on kernel exploits. Apparently /dev/mem only allows access to low physical memory (<1M) and BIOS regions on most distros due to CONFIG_STRICT_DEVMEM, so it's not a sure shot by any means. After reading a few mailinglist archives about kernel.modules_disabled, it looks like there is a contingent of kernel developers who are arguing for "layered security" over "perfect security", and they are working to enumerate and close holes that elevate root directly to ring0. Even if the LKML people occasionally refuse to take their patches for old unixbeard dogmatic reasons, it looks like they are still being picked up by RHEL/CentOS and Ubuntu. But, this reminds me that I might need to add an "Auditing Recommendations" section to the APT. Technically, the truly paranoid should also keep pristine copies of their initrd, kernel, modules, and init itself, and verify/replace them in the event of sketchy activity. But the question of how to actually verify/replace these files while using an untrusted kernel is another matter.. A few ways come to mind, but if we specify just One True Way, obviously custom rootkits could still be written to cloak against it... In my mind it's OK if some methods fail, because it's all about taking away full certainty of success and certainty of undiscoverability from the adversary. That alone will change their incentives to use the attack. > > > I suspect getting the keys through either mechanism might be > > > trivial compared to getting the infrastructure in place to use > > > the keys for a non-theoretical attack that is cost-effective. > > > > The infrastructure is already there for other reasons. See for example, > > the CALEA broadband intercept enhancements of 2007 in the USA. Those can > > absolutely be used to target specific Tor users and completely > > transparently deanonymize their Tor traffic today, with one-time key > > theft (via NSL subpoena) of Guard node keys. > > CALEA might provide access to the traffic, but the attacker still > has to analyze it. I'm not saying that is impossible or inconceivably > hard, but I'd expect it to be a lot more complicated than getting the > keys from a system the attacker already have root access. Wrt analysis, for tagging there is none needed. 
It's a fire and forget method that can be deployed as an extension module to existing intercept solutions. You just use a modified stunnel (or similar TLS proxy) to embed a unique identifier into the circuits of clients you're interested in, read it out again at compromised exits, and log the traffic along with its unique ID. If a tagged circuit is created to a non-compromised exit, that exit just kills it for you instantly before streams even get attached (for example, during the client bootstrap process or during predictive circuit building), and the client happily transparently retries until it creates a successful circuit to a compromised exit. Yes, there are things we can do to defend against these attacks in the client. See https://trac.torproject.org/projects/tor/ticket/5456 for some of those. But I think we should also take this opportunity to think a little deeper about protecting and rotating relay keys in the first place. -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] sustained bandwidth drop through noisetor
Thus spake Andy Isaacson (a...@hexapodia.org): > noisetor-01 was pushing 300-400 Mbps of traffic from 2012-02-15 > through 2012-04-13. Since mid-April we've seen traffic decrease > significantly; over the last week, our daily peak has been 260 Mbps > (versus 450 Mbps in March) and our daily trough has been 100 Mbps > (versus 300 Mbps in March). > > The traffic levels dropped off in a smooth fashion over a 10 day period, > April 15-25 if I read the graphs correctly. > > Has there been a change in the routing algorithm, or any other network > changes that might explain this drop? The loss of non-TBB Torbutton users might explain a drop post Apr 20th, when their ability to move tabs around was borked by FF12: https://bugzilla.mozilla.org/show_bug.cgi?id=715885#c33 But otherwise, according to https://metrics.torproject.org/consensus-health.html, all 5 bw auths are voting, and I have not changed the algorithms. -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] too many abuse reports
Thus spake mick (m...@rlogin.net): > On Tue, 22 May 2012 13:29:54 -0500 > Jon allegedly wrote: > > > Yep same here, got notice today from ISP on a report of the 20th for > > alledged hacking with someone using sqlmap. the reporting ip was a > > brazilian gov ip address. > > > > I just blocked the port and kept on serving As of yet, no one has mentioned the port. Out of curiosity, is it included in the Reduced Exit Policy? https://trac.torproject.org/projects/tor/wiki/doc/ReducedExitPolicy Also, I think the right answer is a solution like https://trac.torproject.org/projects/tor/wiki/doc/TorAbuseTemplates#SSHBruteforceAttempts rather than blocking anything on the relay side. > I assume you mean "IP address" rather than "port" here. > > Despite offering, I wasn't given the opportunity to do that. Yeah, this sucks. But hey, if you're forced to be a middle relay, you now have a lot of really super cheap options for bandwidth. You should consider shopping around. Bandwidth literally gets cheaper every year. For example, last year, FDCservers was charging $600/mo for 1 Gbit dedicated. This year, they now provide a 10 Gbit line for that price! FDC doesn't allow exits either, but the falling price points tell me you should seriously try to renegotiate price with your ISP (or just move elsewhere) if they are degrading your service by forcing you into non-exit. Exit bandwidth is worth paying a premium for, because it does require more resources at the ISP's end in terms of occasional abuse noise. You could also try negotiating upwards if your ISP's prices are already competitive with FDC's for middle service. Something tells me they're not, though :). -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] case law on for exit nodes
Thus spake Daniel Case (danielcas...@gmail.com): > On 22 May 2012 22:12, Rejo Zenger wrote: > > > > Who knows about cases where the owner of Tor exit node was prosecuted or > > taken to court for information that was up- or downloaded using his/her Tor > > node? Basically, I'm looking for case law on running Tor exit-nodes. > > >From the Legal FAQ: > > *Has anyone ever been sued or prosecuted for running Tor?* > > *No*, we aren’t aware of anyone being sued or prosecuted in the United > States for running a Tor relay. Further, we believe that running a Tor > relay — including an exit relay that allows people to anonymously send and > receive traffic — is lawful under U.S. law. AFAIK, this is still true in the US. However, I'm pretty sure I've seen at least 3 court cases in the EU on this list (though too busy to dig them up right now). There have also been several equipment seizures in the EU that never escalated to a court case... -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] too many abuse reports
Thus spake Jon (torance...@gmail.com): > On Tue, May 22, 2012 at 3:17 PM, Mike Perry wrote: > > > > On Tue, 22 May 2012 13:29:54 -0500 > > > Jon allegedly wrote: > > > > > > > Yep same here, got notice today from ISP on a report of the 20th for > > > > alledged hacking with someone using sqlmap. the reporting ip was a > > > > brazilian gov ip address. > > > > > > > > I just blocked the port and kept on serving > > > > As of yet, no one has mentioned the port. Out of curiosity, is it > > included in the Reduced Exit Policy? > > https://trac.torproject.org/projects/tor/wiki/doc/ReducedExitPolicy > > > > The port was 57734 - of course that doesn't mean another port could be > used Are you sure that's not the source port (which is randomized) for the incident? This is a weird destination port. If so, simply switching to the Reduced Exit Policy (or adding a reject line for *:57734) would prevent the attack from using your exit. No need to stop exiting entirely. -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
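If that really is the destination port, a single reject line in torrc (rather than dropping whole ranges or stopping exiting entirely) would be enough; a sketch:

    ExitPolicy reject *:57734
    # ...followed by your existing accept lines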
Re: [tor-relays] case law on for exit nodes
Thus spake Andy Isaacson (a...@hexapodia.org): > On Thu, May 24, 2012 at 10:10:48PM +0200, Rejo Zenger wrote: > > > There are at least two cases where the exit operator has been > > > slapped with a 'national security' gag order and cannot talk about > > > the case. > > > > These two are both German cases? - if you are allowed to elaborate on > > that. > > I have no personal knowledge about the cases at hand, but "national > security gag order" sounds like a USA NSL: > http://en.wikipedia.org/wiki/National_Security_Letter I would also like to take this opportunity to display my "I have not received an NSL" card. I think it's still legal to do *that*, right? ;) But who knows about the upstream ISP(s) or the random stool duster on shift at the datacenter that day... To be fair, it sounded like there was a possibility the NSL might have been more like "WTF just happened? Was that really a Tor node?" But who knows. It could have been "Give me dem keys, or else!" kind of thing. My vote would be "time for new keys" in that case. Maybe we should get a legal opinion on if these things can actually be arbitrarily coercive in nature. "Give me your key. Also, keep using it. Also, tell your mother you hate her and wish she was dead." Does the madness ever end? -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] How many PK operations does a typical home-run relay or bridge do in 24 hours?
Thus spake Karsten Loesing (kars...@torproject.org): > At the Florence hackfest I was asked for the typical number of RSA > operations performed by a relay or bridge, say, per day. We're mostly > interested in home-run relays and bridges on DSL lines and similar, > because that's where Torouter devices will be deployed, too. So, > Amunet and TorServers are out here. :) Dude, if crypto acceleration works out on these things, 8 of them shoved in a 1U space might be cheaper to deploy than a beefy 8-core 1U machine. Of course, most sane datacenters might consider this a fire hazard, unless we can create some sort of safe racking harness for them... > Tor has a built-in feature to count PK operations. If one sends a > USR1 signal to the tor process, it writes a line like this to its log > file (though this line comes from a client): > > Jul 10 18:31:22.904 [info] PK operations: 0 directory objects signed, > 0 directory objects verified, 0 routerdescs signed, 2968 routerdescs > verified, 216 onionskins encrypted, 0 onionskins decrypted, 30 > client-side TLS handshakes, 0 server-side TLS handshakes, 0 rendezvous > client operations, 0 rendezvous middle operations, 0 rendezvous server > operations. > > Of course, if someone has more thoughts on measuring how many PK ops a > relay or bridge does, please say so! :) What does the log line mean? It looks like these are counts since startup? I assume your plan is to divide by the total uptime of the relay? Does SIGHUP clear them? Can they get cleared in other situations? -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
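For relay operators who want to collect this themselves, a sketch of the procedure described above; it assumes info-level logging to a file is enabled in torrc and that the log lives at /var/log/tor/log (both are assumptions to adjust for your setup):

    kill -USR1 $(pidof tor)                          # ask tor to dump its PK counters
    grep 'PK operations' /var/log/tor/log | tail -1  # most recent counter line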
Re: [tor-relays] Call for discussion: turning funding into more exit relays
Thus spake k...@damnfbi.tk (k...@damnfbi.tk): > Hey all, > Have you contemplated sending this over to the hackerspaces list? There exists THE list for hackerspaces? Well hot damn. Are these them: http://lists.hackerspaces.org/mailman/listinfo/ Is there a specific sub-list we should focus on? Announce? Discuss? Other? Also, how do we recognize reputable Hackerspaces from "Sketchy bunch of d00dz who think it will be totally awesome fun to pwn a bunch of Tor users?" Should we check for previous reliable Tor relays from them? Should we just not care? -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] Call for discussion: turning funding into more exit relays
Thus spake Nils Vogels (bacardic...@gmail.com): > On Tue, Jul 24, 2012 at 9:17 AM, Mike Perry wrote: > > > Thus spake k...@damnfbi.tk (k...@damnfbi.tk): > > > > > Hey all, > > > Have you contemplated sending this over to the hackerspaces list? > > > > There exists THE list for hackerspaces? Well hot damn. Are these them: > > http://lists.hackerspaces.org/mailman/listinfo/ > > > > Also, how do we recognize reputable Hackerspaces from "Sketchy bunch of > > d00dz who think it will be totally awesome fun to pwn a bunch of Tor > > users?" Should we check for previous reliable Tor relays from them? > > Should we just not care? > > It's funny this comes up now :) I know for a fact that most Dutch > hackerspaces either run a tor node, or have a member running a Tor node. > Their motives have never been questioned, so why start now :) Yeah, I was asking a subset of Roger's parent question: "Should we fund new relays by new people, fund new relays by existing community members, or fund upgrades to existing relays by existing community members?" I think if we just start dumping money on total strangers who have never run Tor exits before, it is less likely to lead to a stable outcome where those exits continue to exist. > In most countries there is a foundation covering multiple hackerspaces, > these are usually where you'd want to start. If you need some more contacts > in the Benelux and UK area, I can lend a hand. Good suggestion. I do generally agree that hackerspaces are a great untapped potential for running more Tor nodes. It is definitely something that should be explored. Not sure who (if anyone) is tasked with driving this whole exit sponsoring initiative yet, though. I also like the idea of favoring larger, better organized hackerspaces that are more likely to be able to continue to manage their exits over the long term. -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] Exit Port Usage Statistics for "Allow all" nodes
Thus spake Moritz Bartl (mor...@torservers.net): > At torservers.net, we run some large exit relays with an "allow all > except port 25" policy. > > These are statistics from ARM showing exit port statistics of a fast > exit running for seven hours at 30-40 MB/s: > > 443 HTTPS 17650 (%55) > 80HTTP10625 (%33) > 8344 739 (%2) How long has this relay been up using that IP? Was the exit only 7 hours old? And is this read+write, or just one direction? Here's the read and write statistics from the ExtraInfo descriptors from a handful of the fastest default-policy and reduced-policy relays: Default exit lumumba read 819.7M other: 66.5% 80: 22.7% 443: 5.1% 51413: 1.4% 6881: 1.2% 182: 0.8% Default exit lumumba wrote 257.7M other: 91.4% 443: 1.9% 80: 1.8% 6881: 1.6% 62686: 1.5% 51413: 1.5% Default exit chomsky read 744.0M 80: 44.7% other: 40.1% 443: 10.0% 182: 1.1% 51413: 1.0% 4000: 0.7% Default exit chomsky wrote 164.3M other: 85.2% 443: 5.1% 80: 4.8% 51413: 2.7% 32460: 1.0% 6881: 1.0% Default exit politkovskaja2 read 872.3M other: 70.5% 80: 19.3% 443: 4.2% 51413: 1.8% 182: 1.1% 6881: 0.9% Default exit politkovskaja2 wrote 292.4M other: 92.8% 443: 1.6% 12762: 1.4% 51413: 1.4% 80: 1.4% 6881: 1.3% Default exit politkovskaja read 887.8M other: 70.7% 80: 19.1% 443: 4.2% 51413: 1.5% 6881: 1.0% 23877: 0.8% Default exit politkovskaja wrote 301.3M other: 93.0% 443: 1.7% 51413: 1.5% 80: 1.5% 6881: 1.3% 45682: 0.8% Default exit rainbowwarrior read 695.0M 80: 43.9% other: 41.7% 443: 9.5% 182: 1.4% 51413: 0.9% 6881: 0.6% Default exit rainbowwarrior wrote 148.5M other: 86.9% 443: 5.3% 80: 4.7% 51413: 1.4% 6881: 1.3% 8333: 0.2% Misc Exit bouazizi read 662.9M 80: 78.4% 443: 20.6% 8080: 0.3% other: 0.2% 22: 0.1% 8333: 0.1% Misc Exit bouazizi wrote 30.4M 443: 56.5% 80: 42.9% other: 0.2% 8080: 0.1% 22: 0.1% 8333: 0.1% Misc Exit assk2 read 314.6M 80: 78.1% 443: 20.4% 563: 0.5% 8333: 0.4% 8080: 0.2% 3389: 0.1% Misc Exit assk2 wrote 13.3M 443: 55.1% 80: 42.9% 8333: 1.1% other: 0.9% 8080: 0.1% 3128: 0.0% Misc Exit assk read 275.6M 80: 78.6% 443: 20.3% 8333: 0.5% 8080: 0.3% 3389: 0.1% other: 0.1% Misc Exit assk wrote 12.4M 443: 55.5% 80: 41.9% 8333: 2.0% other: 0.6% 8080: 0.1% 995: 0.0% Misc Exit Amunet6 read 265.1M 80: 78.0% 443: 19.6% 8080: 1.0% 22: 0.9% 3389: 0.1% 8333: 0.1% Misc Exit Amunet6 wrote 10.7M 443: 54.2% 80: 44.7% 8333: 0.4% other: 0.4% 22: 0.1% 81: 0.1% Misc Exit noiseexit01b read 250.2M 80: 78.0% 443: 20.5% 8080: 0.5% 81: 0.4% 8333: 0.2% 995: 0.1% Misc Exit noiseexit01b wrote 10.9M 443: 53.0% 80: 45.2% 8333: 1.1% other: 0.5% 8080: 0.1% 995: 0.0% Misc Exit raskin read 670.2M 80: 77.6% 443: 21.2% 8080: 0.3% 563: 0.2% other: 0.2% 81: 0.2% Misc Exit raskin wrote 30.3M 443: 57.3% 80: 42.1% 8333: 0.3% other: 0.3% 995: 0.0% 8080: 0.0% -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] Exit Port Usage Statistics for "Allow all" nodes
Thus spake Steve Snyder (swsny...@snydernet.net): > On Wednesday, August 15, 2012 4:44pm, "Mike Perry" > said: > > Here's the read and write statistics from the ExtraInfo descriptors > > from a handful of the fastest default-policy and reduced-policy > > relays: > > > > Pardon my tangent, but: The enormous discrepancy in read/write values, > if accurate, makes a mockery of AccountingMax for purposes of tracking > bandwidth used. These are per-port exit stats. I believe AccountingMax is for *all* relayed traffic. For every byte the exit writes to a port, that is a byte it had to read on an orconn from another relay. Similarly, for every byte an exit reads from a port, that is a byte it has to write to an orconn to another relay. Thus, for exit relays, *total* upstream and downstream will be mostly symmetric. There may be some discrepancies for packing data into 512-byte cells, though. -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] Massive ongoing google groups spamming
Thus spake Moritz Bartl (mor...@torservers.net): > On 07.09.2012 00:15, tor-admin wrote: > > Running an Exit without custom WHOIS, all abuse message are received by my > > ISP and forwarded. So I have to temporarily ban google groups. How do other > > operators deal with this? > > We receive the same reports and answer them like always; now, as they > are still coming in, we ignore them. Yes, we even have a template for the Google Groups case on the wiki: https://trac.torproject.org/projects/tor/wiki/doc/TorAbuseTemplates#GoogleGroupsSpam But if the same person keeps sending you the same abuse message as opposed to actually trying to get Google to block the offending authenticated Google account, well that person has spamming issues of their own. Are Google Groups accounts easier to create than Gmail accounts for some reason? I was under the impression Gmail has hated on account creation over Tor for some time now.. If that is still true, it's likely this new abuser uses both Tor and non-Tor... Thus simply blocking Tor from Usenet (even if we could) as the abuse complaint demands is unlikely to stop the abuse. -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] Can you double check my exit policy for usefulness while minimizing complaints
Thus spake Nate Homier (t...@universal-mechanism.org): > I was wondering if I have a good compromise between not allowing > BitTorrent and allowing enough ports to be useful. Here's mine. I think the better question is "Why do you think you should remove the ports you removed from the ReducedExitPolicy?" If you can't answer that question, you should just use the ReducedExitPolicy. > How does this compare with this policy located here: > https://trac.torproject.org/projects/tor/wiki/doc/ReducedExitPolicy > > Should I use the official Tor reduced policy or is mine good enough to > be useful while minimizing complaints. If you're already going to run an exit, it is best to be as permissive as possible. It is a bad idea to arbitrarily restrict the apps that people can use Tor for without very good reason. After you remove bittorrent, most of the abuse mail you'll get will be due to 80 and 443 anyway. There are also technical reasons to avoid having 1000 slightly different versions of the reduced exit policy. Hence the reduced policy allows every app port that we could find in use, *except* bittorrent. -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
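For reference, an abbreviated sketch of what ReducedExitPolicy-style torrc lines look like. This is deliberately incomplete; copy the full, current list from the wiki page above rather than from this excerpt:

    ExitPolicy accept *:20-23, accept *:43, accept *:53, accept *:80, accept *:110
    ExitPolicy accept *:143, accept *:443, accept *:587, accept *:993, accept *:995
    ExitPolicy reject *:*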
Re: [tor-relays] [tor-talk] Skype banned tor-nodes?
Cross-posting to tor-relays to give everyone the heads up - I just added 33033 to the ReducedExitPolicy page for Skype: https://trac.torproject.org/projects/tor/wiki/doc/ReducedExitPolicy Non-relay related discussion should remove tor-relays from the Cc. James Brown: > Hello, people! > Because many of my contacts continue to use Skype and don't want to use any nicer software (such as Jabber, TorChat, etc.), I need to use it too. > To protect my location, I use it only through Tor (on a VM running under a transparently torified user). > Some days ago it became very, very difficult to connect to Skype through the Tor network (earlier it was very easy). > As far as I can see in my Vidalia, my Skype application can resolve through Tor and can connect to hosts through port 443, but connections to port 33033 cannot be established (I can see it trying to open and then being closed immediately). > As far as I can see, Skype doesn't work without those connections, which now get established very rarely. > I have two hypotheses: > a) the Skype team is blocking Tor connections to Skype (but I can still connect to Skype on port 443, and I can browse their web site and even log in to it through Tor without any problems), or > b) many Tor exits don't allow the above-mentioned port (33033) in their ExitPolicy. > It would not be good of the Skype team if they are doing the first. > If the second hypothesis is right, I ask owners of exit nodes, if possible, to allow that port in their ExitPolicies. Not sure if that's actually the problem, but if the only way you can get to Skype is to use a BitTorrent-supporting exit, it certainly seems like a possibility. Thanks for the heads-up, James! -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] [tor-talk] Tor 0.2.4.13-alpha is out
resents a bugfix on a previous bugfix: the original fix > attempted in 0.2.4.10-alpha was incomplete. Fixes bug 8235; bugfix > on 0.2.4.1-alpha. > - Give a less useless error message when the user asks for an IPv4 > address on an IPv6-only port, or vice versa. Fixes bug 8846; bugfix > on 0.2.4.7-alpha. > > o Minor features: > - Downgrade "unexpected SENDME" warnings to protocol-warn for 0.2.4.x, > to tolerate bug 8093 for now. > - Add an "ignoring-advertised-bws" boolean to the flag-threshold lines > in directory authority votes to describe whether they have enough > measured bandwidths to ignore advertised (relay descriptor) > bandwidth claims. Resolves ticket 8711. > - Update to the June 5 2013 Maxmind GeoLite Country database. > > o Removed documentation: > - Remove some of the older contents of doc/ as obsolete; move others > to torspec.git. Fixes bug 8965. > > o Code simplification and refactoring: > - Avoid using character buffers when constructing most directory > objects: this approach was unwieldy and error-prone. Instead, > build smartlists of strings, and concatenate them when done. > > ___ > tor-talk mailing list > tor-t...@lists.torproject.org > https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] [tor-talk] Theft of Tor relay private keys?
thomas.hluch...@netcologne.de: > Hello all, please help me understand this: > > Suppose the NSA were able to break into a large number of Tor nodes, > stealing the Tor private server keys. At the same time they were able > to gather all the traffic they can get. Wouldn't this increase the > likelihood that data from complete circuits can be decrypted and > traced back to the original sender? If their intercepts are passive, merely stealing relays' private identity keys won't accomplish much because Tor uses Forward Secrecy for both the relay TLS links and for circuit setup. https://en.wikipedia.org/wiki/Perfect_forward_secrecy However, if their intercepts are active (as in they can arbitrarily manipulate traffic in-flight), then stealing either Guard node keys or directory authority keys allows complete route capture and traffic discovery of targeted clients. So you are right to worry about this, in my opinion. I am also very concerned. I want to make some changes to Tor to make such key theft easier to detect, less damaging, and harder to make use of: https://trac.torproject.org/projects/tor/ticket/7126 https://trac.torproject.org/projects/tor/ticket/5968 However, I want to do a lot of other things in 0.2.5.x too, and there's this whole browser thing that I'm technically supposed to be paying attention to as well, so I might not get to those. :/ That first one (#7126) is probably a good volunteer/student project for someone who likes Python, though. It would be easy to make a prototype with Stem, txtorcon, or even TorCtl. > Maybe this is paranoid, but this raises the question for me: what > would happen if I stopped my Tor node once a week, threw away my keys, > and restarted Tor so that new server keys were generated? The NSA would > have to break into my system again, and this would make it harder for > the agency to decrypt circuits where my host is involved. I am in favor of regular identity key rotation for relays, and I want to work towards supporting that better by default: https://trac.torproject.org/projects/tor/ticket/5563 I think keeping your keys on a ramdisk or encrypted filesystem with a memory-only random key (so that if you experience unexplained reboots, etc., they go away) is a good idea. Also, since Tor reads/creates its identity key at startup and doesn't need the file afterwards, you can even 'shred'/'wipe' it after that, so the adversary can't easily pull it off the FS while the system is running. Better still, you can load a kernel module to disable gdb debugger support so the adversary has to actually dig through/manipulate raw memory to get the key (which will be error-prone and is more likely to lead to crashes/panics/reboots): https://gist.github.com/1216637 I started describing these and related ideas here: https://trac.torproject.org/projects/tor/wiki/doc/TorRelaySecurity But I got distracted by more pressing issues before I could finish the scripts... Also, many of those encrypted+authenticated Tor container things probably don't make much sense without Secure Boot to authenticate the boot process up until you can start up Tor. :/ > What would the disadvantages be for the Tor network? Would this confuse > the Tor directory in any way? Sort of. Weekly identity key rotation is too frequent to recommend, for a few reasons. First, it takes the bandwidth measurement servers a couple of days to ramp up the measured capacity of your new identity key, so you will spend a lot of time below your max throughput. Second, you would also likely never get the Guard flag. 
Third, there are also load balancing issues with Guard nodes: as soon as you get the Guard flag, it will take 1-2 months before clients switch to your new Guard, so you will also likely spend that time at less than your full capacity. > Would it be able to keep the nickname or would it have to change also? > Would this have an effect on the onion address if I had a hidden server? No and no, but your hidden server might have brief downtimes/descriptor publish times that correlate with your key rotation. Not sure how severe that is in practice. -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
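As a rough illustration of the "wipe the on-disk identity key once Tor is running" idea above, here is a minimal Python sketch. The key path is an assumption based on a typical DataDirectory layout, and overwrite-then-unlink only approximates shred(1); it gives weaker guarantees on journaling filesystems and SSDs, so treat it as a sketch rather than a hardened tool:

    import os

    # Hypothetical path; adjust to your DataDirectory layout.
    KEY_PATH = "/var/lib/tor/keys/secret_id_key"

    # Overwrite the key material in place, flush it to disk, then unlink it.
    size = os.path.getsize(KEY_PATH)
    with open(KEY_PATH, "r+b") as f:
        f.write(os.urandom(size))
        f.flush()
        os.fsync(f.fileno())
    os.remove(KEY_PATH)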
Re: [tor-relays] Final Warning Notice
Lunar: > Chris Sheats: > > Hey tor-relays, > > > > The past few months, since I upgraded my net connection to 1Gbps, I've > > hit the top 40 fastest relays and the top 20 fastest exit nodes, > > peaking at over 17 MB/s. I've always prided myself on the fact that my ISP, > > CondoInternet in Seattle, has been very welcoming of my reduced exit > > node. In the past, the malicious activity hasn't been "too much" for > > my ISP--examples here: http://yawnbox.com/1461--but now they want me > > to shut it down. What are my options? By "reduced", were you using the ReducedExitPolicy? This would eliminate the bittorrent complaints. It sounds like you were, but I wanted to confirm (and your node is no longer in the consensus :/). https://trac.torproject.org/projects/tor/wiki/doc/ReducedExitPolicy > Is their problem the amount of work they have to do because of the abuse > and legal complaints? Then offer to handle them directly. > > The best way to do so is to become the contact address for the IP. With > your Regional Internet Registry, the process is usually called SWIP [1]. > The issue you might run into is that SWIP is only available for a > minimum of 8 IPv4 addresses. So they might charge you more and you might > have to switch to a new IP address. > > You probably should switch to a non-exit policy while negotiating. If > you and CondoInternet are not able to find a process where you could > handle abuses directly, fast non-exit relays with good bandwidth are > still a very useful contribution to the network! (and they would not get > any legal complaints) Yes, I want to emphasize the value of being a high-capacity non-exit relay. I want to investigate various types of padding for Website Traffic Fingerprinting and correlation, and I think that if we end up having more Guard bandwidth than Exit bandwidth, we can write parameters into the consensus that instruct clients to use this extra capacity for padding: https://trac.torproject.org/projects/tor/ticket/7028 Did they shut you down entirely, even forbidding non-exit for some reason? Or did you decide to move to a new ISP that supports exits? -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] Final Warning Notice
Chris Sheats: > Mike- > >> Is their problem the amount of work they have to do because of the abuse > >> and legal complaints? Then offer to handle them directly. > >> > >> The best way to do so is to become the contact address for the IP. With > >> your Regional Internet Registry, the process is usually called SWIP [1]. > >> The issue you might run into is that SWIP is only available for a > >> minimum of 8 IPv4 addresses. So they might charge you more and you might > >> have to switch to a new IP address. > >> > >> You probably should switch to a non-exit policy while negociating. If > >> you and CondoInternet are not able to find a process where you could > >> handle abuses directly, fast non-exit relays with good bandwidth are > >> still a very useful contribution to the network! (and they would not get > >> any legal complaints) > > > > Yes, I want to emphasize the value of being a high capacity non-exit > > relay. I want to investigate various types of padding for Website > > Traffic Fingerprinting and correlation, and I think that if we end up > > having more Guard bandwidth than Exit bandwidth, we can write parameters > > into the consensus that instruct clients to use this extra capacity for > > padding: > > https://trac.torproject.org/projects/tor/ticket/7028 > > > > Did they shut you down entirely, even forbidding non-exit for some > > reason? Or did you decide to move to a new ISP that supports exits? > > I turned Tor off voluntarily, and have been planning on reconfiguring > my node for relay-only traffic. In previous correspondence, I asked if > there were any other Tor Exit's on their network, and they said no. So > this isn't a good precedent for TorProject/Seattle volunteers > considering that they provide 100 and 1000 Mbps service. Yeah, this is the flip side to my suggestion of switching to non-exit.. In terms of advocacy for Tor, it may be more important to send them a message by taking your business elsewhere. I guess it all depends on how expensive their service is, and if you would keep using it anyway for other purposes. How much does the service cost? And you only get 1 dedicated IP at 1Gbit, or do you get more? Note that only 1 IP means you can only run 2 Tor instances on that, and even with AES-NI, each Tor instance probably caps out at about 300Mbit at most. Without AES-NI, you probably could only push 100-150Mbit per Tor instance... -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
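To put those ballpark figures together, here is a quick back-of-the-envelope sketch (assuming roughly 300Mbit per daemon with AES-NI and at most two daemons per IP, as above; the numbers are estimates, not measurements):

    import math

    link_mbit       = 1000  # 1Gbps uplink
    mbit_per_daemon = 300   # rough per-instance ceiling with AES-NI
    daemons_per_ip  = 2     # at most two ORPorts per IP

    daemons_needed = math.ceil(link_mbit / mbit_per_daemon)      # 4
    ips_needed     = math.ceil(daemons_needed / daemons_per_ip)  # 2
    print(daemons_needed, "tor daemons across", ips_needed, "IPs")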
Re: [tor-relays] hardware
Andy Isaacson: > On Thu, Jul 11, 2013 at 08:46:20PM +0200, Andreas Fink wrote: > > can someone give me hints on what hardware would be best suited to run > > big fat tor exit nodes connected with multiple 1Gbps or 10Gbps links? > > We are considering putting some fat boxes near major internet > > exchanges of the world. > > Modern Xeon, AES-NI is helpful, HT is not very helpful (but not hurtful > either), higher clock rate is more helpful than more cores. 4GB of RAM > per core, you can probably get away with 2GB/core but why skimp. > Noisetor uses most of a 4-core X3350 2.6 GHz to push ~500 Mbps > symmetric. That's without AES-NI, so I'd expect a quadcore 2.5 GHz > AES-NI to be able to fill a 1Gbps pipe. This sounds right (~100Mbit per CPU core without AES-NI), but it would be good to hear Moritz weigh in here with some additional datapoints for AES-NI. Last I heard, AES-NI gets you ~300Mbit per core, but I have no direct experience myself. The key thing to know is that Tor is still not great at multithreading. In fact, the torrc option 'NumCPUs' is mostly useless for relays at this scale. For this reason, you want to run one tor daemon per CPU core, with a max of two per IP, and something like 2-4GB of RAM per daemon like Andy said. That's why we have noiseexit01a-d, Amunet1-8, manning1-2, etc. You probably also shouldn't run too many relays of this size by yourself, either. It is generally considered poor form to run too much of the Tor network by yourself until other people can catch up and balance your efforts. I would look for ways to decentralize/delegate once you get beyond a couple of gigabits or so for this reason. Please feel free to ask the list for suggestions on legal and admin structure for accomplishing this. -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
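As a sketch of the "one daemon per core, two per IP" setup, two tor instances on the same machine might be configured along these lines. The nicknames, ports, and paths are placeholders; on Debian-style systems the tor-instance-create tooling, where available, wraps this up for you:

    ## /etc/tor/torrc.1 (first instance)
    DataDirectory /var/lib/tor1
    ORPort 443
    Nickname MyRelay1
    ContactInfo you@example.com

    ## /etc/tor/torrc.2 (second instance, same IP, separate state and ORPort)
    DataDirectory /var/lib/tor2
    ORPort 9001
    Nickname MyRelay2
    ContactInfo you@example.com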
[tor-relays] Overload data for Exit vs Non-Exit (and Guard vs Middle)?
To try to get to the bottom of the recent influx of clients to the Tor network, it might be useful to compare load characteristics since 8/19 for nodes with different types of flags. People with Munin setups: it would be especially useful if you could post links/graph images for connection counts, bandwidth, and CPU load since 8/19. I'm particularly interested if people with just the Exit flag (w/o Guard) are seeing increased connection counts that aren't explained by uptime or ramp-up. So far the only datapoint I have for this is https://www.torservers.net/munin/torservers.net/psilotorlu.torservers.net/index.html#network but it looks like that node just rebooted and possibly rekeyed, so its connection increase could just be due to that. It would also be interesting to see if Guard+Exits are seeing a greater increase in connection counts than just Guards. I'm also wondering if any aspects of the load have any other relation to node flags. -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
[tor-relays] New GPG key for Mike Perry
Hi everyone, I've finally made a new GPG key (after a scant 7 years!). This new key will be used to sign email from me going forward, and will be used to sign software releases until such time as I get around to creating a second set of keys on a hardware token for that purpose. While I dislike the Web of Trust for a number of reasons*, my plan is to cross-certify these two sets of new keys, and also sign both with my old key. Hence I will not immediately be issuing a revocation for my old key. The new key is attached, and is available on the keyservers (with a signature from my old key) at: http://pgp.mit.edu:11371/pks/lookup?op=get&search=0x29846B3C683686CC Here's the fingerprint and current subkey information for reference: pub 8192R/29846B3C683686CC 2013-09-11 Key fingerprint = C963 C21D 6356 4E2B 10BB 335B 2984 6B3C 6836 86CC uid Mike Perry (Regular use key) sub 4096R/717F1F130E3A92E4 2013-09-11 [expires: 2014-09-11] sub 4096R/A3BD8153BC40FFA0 2013-09-11 [expires: 2014-09-11] This message should also be signed by my previous key, which was used extensively to sign my email and my source code releases prior to today. * Ensuing flamewars about the Web of Trust should reply only to tor-talk. -- Mike Perry [ASCII-armored PGP public key block attached; the armored key data is truncated in this list archive.]
Re: [tor-relays] bandwidth authority algorithm is cracked
nothing more than ten or fifteen minutes in any week. > > > > The local node bandwidth calculation is consistently 490-495 > > Kbytes/sec. Very stable. Very consistent. > > > > The Tor bandwidth authorities assign values anywhere from 100 > > Kbytes/sec to almost 700 Kbytes/sec in an oscillating pattern with a > > period of about one week. > > > > Something is seriously wrong with that. > > > > ___ tor-relays mailing > > list tor-relays@lists.torproject.org > > https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays > > ___ tor-relays mailing > > list tor-relays@lists.torproject.org > > https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays > > > > ___ > tor-relays mailing list > tor-relays@lists.torproject.org > https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] Would be good if Tor were more multithreaded.
The NumCPUs option only causes OnionSkins (circuit creation requests) to be processed on additional threads. This is only a portion of the CPU cost of a tor relay (doesn't include TLS, circuit-level AES, packet handling/memory operations, etc), and now that the network is switching over to the considerably faster ntor circuit handshake, it will become an even smaller one. Making Tor fully multithreaded for all of its CPU-intensive operations is a large task. For now, the best option for high speed relays is to run multiple Tor daemons with separate OR ports. If you have AES-NI, you can do about 300Mbit per Tor daemon. If not, you can do about 100Mbit per daemon. You can run at most two daemons per external IP, so if you have a higher capacity uplink than that, you have to get more IPs from your provider. Sebastian Urbach: > Hi, > > Did you consider the numCPUs option ? > -- > Mit freundlichen Grüssen / Sincerely yours > > Sebastian Urbach > > -- > Those who would give up essential Liberty, > to purchase a little temporary Safety, deserve > neither Liberty nor Safety. > -- > Benjamin Franklin (1706 - 1790), Inventor, > journalist, printer, diplomat, and statesman > > > > Here's where top hangs out on Libero. Seems it would be a better > situation if Tor would actually use the second core. > > top - 12:05:07 up 5 days, 21:35, 1 user, load average: 0.33, 0.43, 0.34 > Tasks: 130 total, 2 running, 128 sleeping, 0 stopped, 0 zombie > Cpu0 : 47.2%us, 21.0%sy, 0.0%ni, 17.6%id, 0.0%wa, 0.0%hi, 13.8%si, 0.3%st > Cpu1 : 0.0%us, 0.3%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st > Mem: 1914136k total, 718704k used, 1195432k free, 139588k buffers > Swap: 3981304k total,0k used, 3981304k free, 116320k cached > > PID USER PR NI VIRT RES SHR S %CPU %MEMTIME+ COMMAND > 1340 _tor 20 0 566m 224m 36m R 71.7 12.0 6168:37 tor > 9887 nobody20 0 21644 13m 892 S 7.3 0.7 0:31.85 scamper > 9889 root 20 0 102m 916 772 S 0.7 0.0 0:02.80 sed > 9921 root 20 0 15028 1312 992 R 0.3 0.1 0:00.19 top >1 root 20 0 19232 1480 1212 S 0.0 0.1 0:00.27 init >2 root 20 0 000 S 0.0 0.0 0:00.00 kthreadd > > > > > ___ > tor-relays mailing list > tor-relays@lists.torproject.org > https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays > > > ___ > tor-relays mailing list > tor-relays@lists.torproject.org > https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] How effective is "NumCPUs"?
Or, you know, you could just run one tor daemon per core as has been suggested. Thanks for your understanding and your patience with us while we work on this and a couple other slightly difficult and pressing engineering problems. Christian Dietrich: > I've got arround 200 mbits with an Intel Xeon E3-1230v2 (not over > 30% total cpu usage - 1 core at ~100%). > Pretty slow for an dedicated gigabit connection, due to this fact > i've killed my nodes. > The ticket for this "problem" is still not solved, after 3 1/2 years. :[ > > quote from the ticket(would sign that): > > May I suggest to get this at critical priority? > 21th century crypto software can't afford to be not fully-threaded ;) > No CPU sold today is mono-core anymore, and I sure few people would > run a tor dedicated relay up 24/24 to see it used at only 1/n'th of > its capacity. > > >On Jan 24, 2014, at 10:49 , Alexander Dietrich wrote: > > > >>Hello, > >> > >>a relay I'm running is currently at about 0.80 load average. It has a > >>dual-core CPU and I have configured "NumCPUs 2". I'm still in the process > >>of finding the bandwidth limit. > >> > >>Should I keep increasing "RelayBandwidthRate" on the single Tor process, or > >>is it a better idea to start a second process? > >In my experience, CPU load does not depend much on the amount of traffic, > >but much more on the number of connections/handshakes. > >___ > >tor-relays mailing list > >tor-relays@lists.torproject.org > >https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays > > ___ > tor-relays mailing list > tor-relays@lists.torproject.org > https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] Problems with domestic ISP blocking publicly listed relays
mick: > On Tue, 28 Jan 2014 19:02:32 + > Paul Blakeman allegedly wrote: > > > > SO… > > Can using a Tor relay result in your IP getting a “bad” flag? > > Yes. Running a Tor node on an IP address you share with your domestic > usage can result in you being unable to reach sites which blacklist Tor > nodes. This sometimes only happens with exit nodes, but some site > operators are even more draconian than others and just block all Tor > IPs. This can be particularly unfortunate if the site in question is > your bank. This is correct, *if* you are running a Tor relay (even a non-exit). And unfortunate. > > Is there any way of running a relay where you “hide” your IP? > > No. Tor relay IP addresses have to be visible to be reachable. This is not fully correct. You can run your Tor relay as a Tor Bridge, in which case its IP is not visible in the public node directory. We only hand it out to people who solve a captcha on https://bridges.torproject.org/bridges We're also looking for people to run Obfsproxy bridges, which are also unlisted but additionally obscure their traffic so that it does not look like Tor traffic. As far as I know, we don't provide packages for this yet, but if you are technically inclined, you can set one up manually on Linux by following these instructions: https://www.torproject.org/projects/obfsproxy-instructions.html.en#instructions -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
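On the configuration side, an unlisted bridge is only a couple of torrc lines. This is a sketch; the obfsproxy line in particular is illustrative, so follow the instructions linked above for the exact invocation on your platform:

    # Minimal unlisted-bridge sketch (the port is a placeholder)
    BridgeRelay 1
    ORPort 443
    ExitPolicy reject *:*
    # Optionally also offer an obfuscated transport; see the obfsproxy
    # instructions linked above for the exact line for your platform:
    ServerTransportPlugin obfs3 exec /usr/bin/obfsproxy managed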
Re: [tor-relays] Canned Abuse Response
Nathaniel Suchy: > See: https://www.torproject.org/eff/tor-dmca-response.html.en and > https://trac.torproject.org/projects/tor/wiki//doc/TorExitGuidelines and > https://www.torservers.net/wiki/abuse/dmca See also: https://trac.torproject.org/projects/tor/wiki/doc/TorAbuseTemplates https://trac.torproject.org/projects/tor/wiki/doc/ReducedExitPolicy > On Sat, Aug 4, 2018 at 2:04 PM wrote: > > > Hello, > > > > I'm just curious, does anyone happen to have a canned abuse response that > > contains the safe harbor provisions of the DCMA? I figured I would ask > > before I wrote up a really long email. > > > > Thanks, > > > > Conrad > > > > -- > > Conrad Rockenhaus > > Fingerprint: 8049 CDBA C385 C451 3348 776D 0F72 F2B5 26DA E93F > > Public Key: > > https://pgp.key-server.io/pks/lookup?op=get&search=0x0F72F2B526DAE93F > > https://www.rockenhaus.com > > -- > > Get started with GreyPony Anonymization Today! > > https://www.greyponyit.com > > > > > > ___ > > tor-relays mailing list > > tor-relays@lists.torproject.org > > https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays > > > _______ > tor-relays mailing list > tor-relays@lists.torproject.org > https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays -- Mike Perry signature.asc Description: Digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
[tor-relays] Operator straw poll: Reasons why you use Tor LTS versions?
Hello relay operators, Thanks for running relays, and thanks to those of you who are moving off of our old EOL Tor versions. Thanks especially to those of you who moved directly to Tor 0.4.1! We would like to transition the LTS Tor to be for use in edge/client/non-relay infrastructure only. We would like to minimize even that use. We need to update our network protocols to improve performance, security, and to add defenses against DoS attacks. Doing this while supporting LTS for relays is expensive, especially when large backports must be done to keep the network safe. It also slows down the uniform deployment of performance improvements, which contributes to the frustrating user experience of abysmal-but-rare performance edge cases. Unfortunately, we still have something like 2500 relays on either Tor 0.2.9-LTS or Tor 0.3.5-LTS. What are the reasons for this? My guess is the top 5 most common responses are: 1. "I didn't know that Debian's backports repo has latest-stable Tor!" 2. "I didn't see the Tor Project repos mentioned in Tor's Relay docs!" 3. "I'm running a distribution that Tor Project doesn't have repos for." 4. "I rolled my own custom Tor from git and forgot about it." 5. "My relay machine was not getting any updates at all. Oops." Does anyone have a reason that they think many other relay operators also share? How can we fix that for you, or at least, how can we make it easier to run the very latest stable series Tor on your relay? -- Mike Perry signature.asc Description: OpenPGP digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] Operator straw poll: Reasons why you use Tor LTS versions?
teor: >> On 5 Sep 2019, at 12:11, Mike Perry wrote: >> >> Unfortunately, we still have something like 2500 relays on either Tor >> 0.2.9-LTS or Tor 0.3.5-LTS. >> >> What are the reasons for this? My guess is the top 5 most common >> responses are: >> >> 1. "I didn't know that Debian's backports repo has latest-stable Tor!" >> 2. "I didn't see the Tor Project repos mentioned in Tor's Relay docs!" >> 3. "I'm running a distribution that Tor Project doesn't have repos for." >> 4. "I rolled my own custom Tor from git and forgot about it." >> 5. "My relay machine was not getting any updates at all. Oops." >> >> Does anyone have a reason that they think many other relay operators >> also share? > > 6. When I tried to update, it didn't work with my old config > 7. I need > features that only exist in older Tors > - I can think of Tor2web, there may be others Are these common? I feel like this is long-tail. I'm looking for the most common reasons first. After we address the most common reasons, we can pick our favorite long-tail use cases and decide if those are worth forward-porting or back-porting individual features for. But not before the common cases are dealt with. That way lies madness, and no progress, ever. > 8. I am maintaining research or other patches against tor, and rebases > are difficult Again, common? I'm going to guess not common (or self-supporting), but this does feel like something we could measure by checking for git versions that don't make sense to us in the full descriptor archives. >> How can we fix that for you, or at least, how can we make it easier to >> run the very latest stable series Tor on your relay? > > The answers are probably something like: > 6. Provide better relay operator support, and direct me to those support > channels in the log messages, when my relay fails to launch. +1 100%. I think this will go light years towards getting rid of non-LTS Tors and LTS Tors alike, regardless of reason. Then we can ask the remainder. > 7. Support old features for longer > 8. Stop refactoring so much code Nah. I'm not interested in these, even if populism demands them. Some shit needs to go away because it is not safe to keep around, and some stuff needs to be better organized to make it easier to improve. -- Mike Perry signature.asc Description: OpenPGP digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] Operator straw poll: Reasons why you use Tor LTS versions?
Roman Mamedov: > > On Thu, 05 Sep 2019 02:11:00 +0000 > Mike Perry wrote: > >> 1. "I didn't know that Debian's backports repo has latest-stable Tor!" > > I only looked at backports when I got a warning on the metrics website that my > versions are not recommended. Aside from that, I thought that running LTS on > relays is actually beneficial, to prevent any newly introduced bugs in the > current latest versions from having an impact on the network infrastructure. We are moving towards relying on CI for finding functional bugs, and code review and static analysis for security issues. I don't believe that current LTS periods of time will necessarily provide better results for either of these classes of risk than investing in better CI and in other forms of diversity than just release version. However, I could see a middle ground where we shorten LTS timescales for the relay side, but don't eliminate them, as we work towards where we want to be with CI and security issue risk reduction (or other forms of diversity). >> 2. "I didn't see the Tor Project repos mentioned in Tor's Relay docs!" > > I was using them in the past, but then decided not to, as it's adding some > management overhead and also one more potential security weakpoint. These two strike me as being likely to be very high on the list of common reasons, when the choice is deliberately made. What can we do to reduce management overhead? Right now, there are quite a lot of separate pages pointing to different pieces of the steps of adding our repo. Is that the problematic piece? Are there additional things that make it more of a hassle than it should be? Someone else mentioned ansible. Would an ansible playbook that adds our repositories and their gpg keys make this easier? Or is it better just to keep the steps all on a single page? Where does the security weakpoint risk come from? Does apt-transport-tor/onion service repository availability help in your mind here? -- Mike Perry signature.asc Description: OpenPGP digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
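For the Debian case specifically, the two options look roughly like this. The codename and paths are illustrative placeholders; follow the official instructions for the current repository lines and signing keys:

    # Option 1: Debian backports (tracks a recent stable tor)
    #   /etc/apt/sources.list.d/backports.list
    deb http://deb.debian.org/debian <codename>-backports main
    #   then: apt update && apt install -t <codename>-backports tor

    # Option 2: the Tor Project repository (latest stable series)
    #   /etc/apt/sources.list.d/tor.list
    deb https://deb.torproject.org/torproject.org <codename> main
    #   plus the repository signing key and the deb.torproject.org-keyring
    #   package, per the instructions on the Tor website.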
Re: [tor-relays] Malicious Tor relays - post-analysis after two months
On 10/3/20 6:38 AM, nusenu wrote: >> Me and several tor relay operator friends have questions about >> Malicious Tor exit nodes. How do you define a node as malicious ? > > In the particular case (at least the initial detection): Traffic manipulation > at the exit relays. > >> How bad is the situation now ? > > This group [1] is still rather active and at this point they run a 3 digit > number > of relays, but it is not the only malicious group that is active on the Tor > network and > might not even be the group I worry about the most. > > [1] > https://medium.com/@nusenu/how-malicious-tor-relays-are-exploiting-users-in-2020-part-i-1097575c0cac > >> Is there any other risk than ssl >> striping ? > > I think so, yes. > The good thing about ssl-stripping attacks is, that it is easy > to protect against and easy to detect (if you are aware). The catch is that > most users are probably not aware. > So when compared with all other types of attacks that malicious relays can > perform, > ssl-stripping is probably not the biggest worry. > >> After the long >> discussion on the tor relay mailing list, what will be implemented as >> a solution ? > > As far as I can see, nothing will change/be implemented in the near future > at the Torproject or Tor directory authority level. > > for Roger's (long term) plan see: > https://gitlab.torproject.org/tpo/metrics/relay-search/-/issues/40001 > linked from > https://blog.torproject.org/bad-exit-relays-may-june-2020 > > >> * is there / will there be things >> implemented as a conclusion of the "call for support for proposal to >> limit large scale attacks" ? > > Nothing came out of that thread. > >> * has it been possible to prepare / set >> up precautions to avoid this king of situation > > I don't think anything has been implemented to prevent or reduce the risk of > this from reoccurring. Unfortunately, our OODA loops[1] on all development and funding actions are devastatingly, catastrophically long. This is due in part to slow funding cycles, and in part due to an internal debate over Agile vs Waterfall methodology[2]. I am in the Agile camp. I believe that Agile will help us respond to things like this in hours, days, or at most weeks, rather than months and years. Agile is how I ran the Tor Browser development. We just signed a funding proposal that covers "network health", which in theory covers network scanning to find and respond to problems like this. However, the funding is scoped to scalability and performance work. It will be a little bit of a stretch to cover this type of exit scanning too, but at least we will have Tor Project staff allocated to this kind of work now. The proposal took 18 months of background planning from us, ~6 months of background research from me, and a couple months of proposal review, with one revision round. Because of these issues on both sides, it has literally been years since we identified this problem area, and got funding to act on it. The good news is we start Monday. 1. https://en.wikipedia.org/wiki/OODA_loop 2. https://www.seguetech.com/waterfall-vs-agile-methodology/ -- Mike Perry signature.asc Description: OpenPGP digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] Malicious Tor relays - post-analysis after two months
On 10/5/20 9:15 AM, Georg Koppen wrote: > Mike Perry: >> On 10/3/20 6:38 AM, nusenu wrote: >>>> Me and several tor relay operator friends have questions about >>>> Malicious Tor exit nodes. How do you define a node as malicious ? >>> >>> In the particular case (at least the initial detection): Traffic >>> manipulation at the exit relays. >>> >>>> How bad is the situation now ? >>> >>> This group [1] is still rather active and at this point they run a 3 digit >>> number >>> of relays, but it is not the only malicious group that is active on the Tor >>> network and >>> might not even be the group I worry about the most. >>> >>> [1] >>> https://medium.com/@nusenu/how-malicious-tor-relays-are-exploiting-users-in-2020-part-i-1097575c0cac >>> >>>> Is there any other risk than ssl >>>> striping ? >>> >>> I think so, yes. >>> The good thing about ssl-stripping attacks is, that it is easy >>> to protect against and easy to detect (if you are aware). The catch is that >>> most users are probably not aware. >>> So when compared with all other types of attacks that malicious relays can >>> perform, >>> ssl-stripping is probably not the biggest worry. >>> >>>> After the long >>>> discussion on the tor relay mailing list, what will be implemented as >>>> a solution ? >>> >>> As far as I can see, nothing will change/be implemented in the near future >>> at the Torproject or Tor directory authority level. >>> >>> for Roger's (long term) plan see: >>> https://gitlab.torproject.org/tpo/metrics/relay-search/-/issues/40001 >>> linked from >>> https://blog.torproject.org/bad-exit-relays-may-june-2020 >>> >>> >>>> * is there / will there be things >>>> implemented as a conclusion of the "call for support for proposal to >>>> limit large scale attacks" ? >>> >>> Nothing came out of that thread. >>> >>>> * has it been possible to prepare / set >>>> up precautions to avoid this king of situation >>> >>> I don't think anything has been implemented to prevent or reduce the risk >>> of this from reoccurring. >> >> Unfortunately, our OODA loops[1] on all development and funding actions >> are devastatingly, catastrophically long. This is due in part to slow >> funding cycles, and in part due to an internal debate over Agile vs >> Waterfall methodology[2]. I am in the Agile camp. I believe that Agile >> will help us respond to things like this in hours, days, or at most >> weeks, rather than months and years. > > If one has folks working on the topic, maybe. But that was and is not > the problem here. We did not have a bunch of engineers who messed up > their Waterfall model. We had and still don't have (as of me writing > this mail) anyone being assigned to work on that. > > So, Agile or whatever would not have helped us in that scenario. The waterfall-style RFP is exactly why it took two years between our discussions of the need for network health work, and our ability to allocate staff to it. To do the conception, initiation, analysis, and design, the performance proposal probably cost the organization somewhere between $150k-$250k, if we did a full accounting. We also relied heavily on volunteer expertise and input. This debate has happened many times in many industries. Here is another example: https://omgrfp.wordpress.com/category/omgrfp/ The Agile world is anti-dogmatic. There is no one true Agile. 
Here is a model that proposes breaking up the RFP phase into an Agile/Lean style discovery contract and main contract, to accommodate waterfall-style RFPs: https://www.agilebuddha.com/agile/agile-for-fixed-bid-projects/ With an Agile/Lean discovery contract, we would have had resources to perform some prototype discovery of the scope of the network scanning problem, and (re)ran some preliminary MVP scans ourselves. Instead, we had to shoot from the hip and wait. Meanwhile, evidence of the exit problem's severity was not actionable by us, due to related org and community issues of overwork and stress. That Agile/Lean model for contracts parallels the Agile model for development, as previously linked: https://www.seguetech.com/waterfall-vs-agile-methodology/ On Monday, we agreed to run the development of the performance contract in a more Agile way. Unfortunately, we still have to wind down the final deployment phases of other projects before we can spin up network health. We planned for this in the performance proposal tim
Re: [tor-relays] Collaborative Bad-Abuse-Sender Blocklist
On 9/28/20 1:54 PM, Matt Corallo wrote: > > Different folks have different views on abuse reports, and that's perfectly > OK. But "taking it up with list XYZ" isn't > going to change that (see discussion on NANOG a few months ago on this very > topic =D) - people are always going to have > their own views on whose responsibility it is to solve "abuse" (under their > current definition). My personal abuse > policy is "I reach try to help you, but if you keep sending the same > automated stuff over and over and don't reply when > I reach out, I drop your mails". I figure there are several Tor exit node > operators with similar policies and > collaborating on such blocklists would save all of us with similar policies > time. > Absolutely. I suspect the problem is ideological. The abuse resolution camp seems to largely subscribe to the "force ISPs to identify abusers and ban them" model. They do not want to hear about mitigation strategies or alternatives, other than their banhammer and abuse notice spamming approaches. Making a banlist of banhammer spammers like that is a brilliant move. I grew so tired of my personal email server constantly ending up in DNSRBLs for no reason (even with DKIM and SPF), that after 20 years of DIY email, I was forced to move to a paid provider. This model is broken, its assumptions are contrary to our values, and it serves to support the business interests of tech oligarchs that believe that the world should be run by a handful of oligarchical ISPs and email providers, with government-issued identity for all. Fuck that. Good luck, Matt! Thanks for being awesome! P.S. Your mails ended up in my provider's spam filter. Dug them out for great justice ;) -- Mike Perry signature.asc Description: OpenPGP digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] BadExit: Rerouting exit relays detected (1) 45.63.11.98
On 10/11/20 10:20 AM, nusenu wrote: > Thanks for the report, I have forwarded it for removal. > > li...@for-privacy.net: >> Wtf, this exit has addresses that do not belong to it! >> https://metrics.torproject.org/rs.html#details/385527185E26937D05E0933DD29FF1699056CAF3 > > Yes, rerouting exit traffic is a practice we have observed in the past. > > BadExit: Rerouting exit relays detected (1) > The following exit relays are routing their traffic back into the tor network: > --- > nickname: exitnew > First seen: 2020-09-25 12:00:00 > Consensus weight: 1410 > AS: Choopa, LLC > OR IP address: 45.63.11.98 > Exit addresses: 185.140.53.7 185.220.101.207 45.154.35.219 45.63.11.98 > 51.158.111.157 > https://atlas.torproject.org/#details/385527185E26937D05E0933DD29FF1699056CAF3 > > > >> I'm very sure there are only nifty rabbits on the 185.220.101.0/24 subnet! > > niftybummy has relays outside of 185.220.101.0/24 I am losing patience with the "let's play nice and let exit IP addresses be predictable" model... We are not being treated well by the banhammer brigade, and it might be time to flip some tables. I would not call simply using a different exit IP than your relay's OR port a bad exit. However, re-routing exit traffic back into Tor like this is not the answer. It is simply wasteful. I am in favor of delisting such relays. Remember that our directory authorities are deliberately independent from TPI though, and even what I think is not necessarily what TPI thinks. The dirauths may have different opinions. Coordinating policy of this nature is difficult and requires consensus building. Again, I understand your frustration. -- Mike Perry signature.asc Description: OpenPGP digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] BadExit: Rerouting exit relays detected (1) 45.63.11.98
On 10/11/20 1:17 PM, nusenu wrote: >> I am losing patience with the "let's play nice and let exit IP addresses >> be predictable" model... We are not being treated well by the banhammer >> brigade, and it might be time to flip some tables. I would not call >> simply using a different exit IP than your relay's OR port a bad exit. > > I'm not calling exit relays using distinct IPs or inbound (OR) and outbound > connections "BadExits" either, quite the opposite, all exits should be using > https://2019.www.torproject.org/docs/tor-manual.html.en#OutboundBindAddressExit > if they have spare IPs. > That is why I implemented and automated that configuration in relayor. Ok that sounds reasonable. Thanks! > I believe I can tell rerouting exits from exits having distinct IPs for > inbound and outbound connections - in most cases. Are your scanners available for others to run? I understand that it is a risk that making them public may allow bad exits to avoid them, but is it ok if other specific people use and adapt the scanners? >> Remember that our directory authorities are deliberately independent >> from TPI though, and even what I think is not necessarily what TPI >> thinks. The dirauths may have different opinions. Coordinating policy of >> this nature is difficult and requires consensus building. > > Since dir auths have been removing these kinds of relays, I don't think there > is any policy change necessary. Ok great! Sometimes I am surprised by their decisions, and I didn't see this one. -- Mike Perry signature.asc Description: OpenPGP digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
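For operators with a spare address, the configuration nusenu describes is just a couple of torrc lines; the addresses below are placeholders:

    # Listen for relay (OR) connections on one address...
    Address 192.0.2.10
    ORPort 192.0.2.10:443
    # ...and originate exit traffic from a different one, so the observed
    # exit IP differs from the ORPort IP without any rerouting tricks.
    OutboundBindAddressExit 192.0.2.20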
Re: [tor-relays] rerouting exits
On 10/11/20 3:08 PM, nusenu wrote: >> Are your scanners available for others to run? I understand that it is a >> risk that making them public may allow bad exits to avoid them, but is >> it ok if other specific people use and adapt the scanners? > > You don't need to actively perform scans (in the sense of establishing > circuits) > to detect rerouting exits, onionoo provides you with the required data: > OR IP: > https://metrics.torproject.org/onionoo.html#details_relay_or_addresses > Exit IPs: > https://metrics.torproject.org/onionoo.html#details_relay_exit_addresses I meant the code for your other scans. We have my original scanner (part of the torflow repo), one that phw wrote, and another set of onion service attack scanners. TPI might consider also running your scanners in addition to or instead of some of these. Plus more people running scanners may mean faster results and easier result confirmation... Though of course this is subject to the obvious arms-race problem if the scans are discovered. I also agree with your ticket about the time rotation feature. And I'm not sure we should necessarily publish this info anymore. I think this and similar ideas should be explored. We're trying to figure out how to put it all together into an approach that makes sense. -- Mike Perry signature.asc Description: OpenPGP digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
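As nusenu points out above, the simple OR-address vs. exit-address comparison needs no active scanning at all. Here is a minimal Python sketch of that check against onionoo (field names per the onionoo documents linked above); the heuristic is deliberately naive and will also flag perfectly legitimate exits that just use OutboundBindAddressExit, so it is a starting point for review, not a badexit detector:

    import json
    import urllib.request

    # Fetch OR and exit addresses for all relays with the Exit flag.
    url = ("https://onionoo.torproject.org/details"
           "?flag=Exit&fields=nickname,or_addresses,exit_addresses")
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)

    for relay in data.get("relays", []):
        or_ips = {a.rsplit(":", 1)[0].strip("[]")
                  for a in relay.get("or_addresses", [])}
        exit_ips = set(relay.get("exit_addresses", []))
        extra = exit_ips - or_ips
        if extra:
            print(relay.get("nickname"), "exits from IPs not in or_addresses:", extra)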
Re: [tor-relays] growing guard probability on exits (2020-10-15)
On 10/16/20 3:49 AM, nusenu wrote: > lets see when this graph stops growing > https://cryptpad.fr/code/#/2/code/view/1uaA141Mzk91n1EL5w0AGM7zucwFGsLWzt-EsXKzNnE/present/ Yes let's keep an eye on this. I doubt it is directly related, but it could be a side effect. However, I suspect that the KIST change will most affect Guards, especially those used by loud clients. It will allow them to handle much more traffic from loud clients, and probably get higher consensus values as a result. > why is this relevant? > It puts more entities into an end-to-end correlation position than there used > to be > https://nusenu.github.io/OrNetStats/#tor-relay-operators-in-end-to-end-correlation-position I share this concern. It seems plausible and even likely to me that Exits are more likely to be surveilled than non-Exits, which makes them more dangerous to use in both entry and exit positions. Additionally, the use of an Exit in the Guard position leaks information, since you will never use that Exit to connect anywhere, and this is visible over a long period of time, leading to Guard discovery. I want to remove the ability for Exits to become Guards entirely. In addition to the correlation and Guard discovery issues, it has historically caused much excess complexity for load balancing. If Exits can't also become Guards, the load balancing equations become way more legible and no longer have the "poorly defined constraint" problem. This means the complicated scarcity cases from the solution go away: https://gitlab.torproject.org/tpo/core/torspec/-/blob/master/proposals/265-load-balancing-with-overhead.txt#L51 > and it might also decrease exit traffic on exits when a tor client > chooses an exit as guard Hrm... this will be a function of how many clients choose that Exit. This process will take months, because of the long guard rotation period. If we keep flapping in and out of Exits-as-Guards, they are unlikely to accumulate many clients. The guard rotation period is another source of load balancing pain. For outstanding issues with our attempt at solving it, see: https://gitlab.torproject.org/tpo/core/tor/-/issues/16255 -- Mike Perry signature.asc Description: OpenPGP digital signature ___ tor-relays mailing list tor-relays@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
Re: [tor-relays] cases where relay overload can be a false positive
On 1/12/22 5:36 PM, David Goulet wrote: On 01 Jan (21:12:38), s7r wrote: One of my relays (guard, not exit) started to report being overloaded about one week ago, for the first time in its life. The consensus weight and advertised bandwidth are proper as per what they should be, considering the relay's configuration. More than this, they have not changed for years. So, I started to look at it more closely. Apparently the overload is triggered every 5-6 days by flooding it with circuit creation requests. All I can see in tor.log is: [warn] Your computer is too slow to handle this many circuit creation requests! Please consider using the MaxAdvertisedBandwidth config option or choosing a more restricted exit policy. [68382 similar message(s) suppressed in last 482700 seconds] [warn] Your computer is too slow to handle this many circuit creation requests! Please consider using the MaxAdvertisedBandwidth config option or choosing a more restricted exit policy. [7882 similar message(s) suppressed in last 60 seconds] This message is logged like 4, 5, or 6 times, with a 1 minute (60 sec) difference between each warn entry. After that, the relay is back to normal. So it feels like it is being probed or something like this. CPU usage is at 65%, RAM is at under 45%, SSD no problem, bandwidth no problem. Very plausible theory; especially in the context of such a "burst" of traffic, we can rule out that your relay has all of a sudden become the facebook.onion guard. Metrics port says: tor_relay_load_tcp_exhaustion_total 0 tor_relay_load_onionskins_total{type="tap",action="processed"} 52073 tor_relay_load_onionskins_total{type="tap",action="dropped"} 0 tor_relay_load_onionskins_total{type="fast",action="processed"} 0 tor_relay_load_onionskins_total{type="fast",action="dropped"} 0 tor_relay_load_onionskins_total{type="ntor",action="processed"} 8069522 tor_relay_load_onionskins_total{type="ntor",action="dropped"} 273275 So if we account the dropped ntor circuits with the processed ntor circuits we end up with a reasonable % (it's >8 million vs <300k). Yeah, so this is a ~3.38% drop, so it immediately triggers the overload signal. > So the question here is: does the computed consensus weight of a relay change if that relay keeps sending reports to directory authorities that it is being overloaded? If yes, could this be triggered by an attacker, in order to arbitrarily decrease a relay's consensus weight even when it's not really overloaded (to maybe increase the consensus weights of other malicious relays that we don't know about)? Correct, this is a possibility indeed. I'm not entirely certain that this is the case at the moment as sbws (bandwidth authority software) might not be downgrading the bandwidth weights just yet. But regardless, the point is that this is where we are going. But we have control over this so now is a good time to notice these problems and act. I'll try to get back to you asap after talking with the network team. My thinking is that sbws would avoid reducing the weight of a relay that is overloaded until it sees a series of these overload lines, with fresh timestamps. For example, just one with a timestamp that never updates again could be tracked but not reacted to, until the timestamp changes N times. We can (and should) also have logic that prevents sbws from demoting the capacity of a Guard relay so much that it loses the Guard flag, so DoS attacks can't easily cause clients to abandon a Guard, unless it goes entirely down. Both of these things can be done on the sbws side. 
This would not solve short blips of overload from still being reported on the metrics portal, but maybe we want to keep that property.

>> Also, as a side note, I think that if the dropped/processed ratio is not over 15% or 20%, a relay should not consider itself overloaded. Would this be a good idea?
>
> Plausible that it could be a better idea! It's unclear what an optimal percentage is, but personally I'm leaning towards a higher threshold, so that it is not triggered in normal circumstances. But I think if we raise this to, let's say, 20%, it might not stop an attacker from triggering it. It might just take a bit longer. Hrmm.

Parameterizing this threshold as a consensus parameter might be a good idea (a small sketch of such a check follows this message). I think that if we can make it such that an attack has to be "severe" and "ongoing" long enough that a relay has lost capacity and/or lost the ability to complete circuits, and that relay can't do anything about it, then that relay unfortunately should not be used as much. It's not like a circuit through it would be likely to succeed, or be fast enough to use, in that case anyway.

We need better DoS defenses generally :/

--
Mike Perry
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
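To make the threshold discussion above concrete, here is a small, hypothetical Python sketch that computes the ntor drop ratio from the MetricsPort counters quoted earlier and compares it against a configurable threshold of the kind a consensus parameter could carry. This is not tor's actual overload accounting; the threshold value and function name are assumptions for illustration only:

  OVERLOAD_DROP_THRESHOLD = 0.20   # assumed 20% threshold, per the discussion above

  def ntor_drop_ratio(processed, dropped):
      # Ratio of dropped ntor onionskins to processed ones.
      return dropped / processed if processed else 0.0

  ratio = ntor_drop_ratio(processed=8069522, dropped=273275)
  print("ntor drop ratio: %.2f%%" % (ratio * 100))       # ~3.4%, the small drop rate discussed above
  print("overloaded?", ratio > OVERLOAD_DROP_THRESHOLD)  # False with a 20% threshold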
Re: [tor-relays] cases where relay overload can be a false positive
On 1/23/22 5:28 PM, s7r wrote:
> Mike Perry wrote:
>> We need better DoS defenses generally :/
>
> Of course we need better defenses; DoS is never actually fixed, no matter what we do. It's just an arms race, the way I see it.

Well, I am extremely optimistic about https://gitlab.torproject.org/tpo/core/torspec/-/blob/main/proposals/327-pow-over-intro.txt

Despite random internet hate on PoW, in the context of onion service DoS (which is our main cause of overall network DoS and actual overload), it looks very likely to be an effective option. Prop327 describes how we can use PoW to build a system that auto-tunes itself such that only circuits with a sufficient level of PoW succeed. This means that we won't use PoW at all unless it is needed (ie: only require the level of PoW needed to jump the queue of DoS requests, and advertise this level in the service descriptor). In this way, an attack can be made considerably more expensive (and most importantly: less profitable) for an attacker, with the only effect being that clients wait a bit longer to connect, to solve the PoW, and only while an attack is ongoing. If this system is effective enough, DoS-for-ransom attacks should vanish due to low profitability. And because the system auto-tunes, we should also no longer need PoW at all after it is deployed, because of this deterrent effect. Win-win-win. (A toy sketch of the queue-jumping idea follows this message.)

Jamie Harper did an implementation of this proposal: https://github.com/jmhrpr/tor-prop-327

David, asn, and I did a preliminary code review of that branch, and while it needs more unit tests, and we need to reduce some of its external dependencies, it was of surprisingly high code quality. Unfortunately, Jamie graduated and moved on to other things.

> But if we reduce the consensus weight, or assume at the network level that relay X should be used less because of a super tiny percentage of dropped circuits, we could end up wasting network resources on one side, and on the other side maybe granting better probability chances for evil relays that we have not discovered yet to grab circuits. A consensus parameter is of course appropriate here; maybe 20% is a big threshold and it should be less, but right now even 0.1% is reported and treated as overload, and IMO this is not acceptable.

I generally agree here. For this, I filed: https://gitlab.torproject.org/tpo/core/tor/-/issues/40560

An important detail here is that ntor drops *are also* a form of that same kind of bias away from attacked relays, and toward evil relays. If a relay is under such heavy DoS that it is already dropping X% of ntors, it is *already* being forced to not carry X% of circuits. So, at minimum, it is reasonable to give it X% less traffic. However, this reduction must also be subject to the limits of not stripping the Guard flag away just for overload/DoS, as I mentioned in my previous post.

At any rate, before we even consider using overload-general for relay weighting, we first need to transition the bwauths to use congestion control and monitor how that system behaves wrt reducing overload. So it will be a while, and we will have time to think about and analyze this stuff further.

--
Mike Perry
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
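The core "jump the queue" idea described above can be illustrated with a small Python sketch: rendezvous requests are ordered by their verified PoW effort, and the advertised suggested effort rises while a backlog exists and decays back to zero otherwise. This is not the actual prop 327 algorithm or Tor code; all names, constants, and the simplistic tuning rule are assumptions for illustration only:

  import heapq

  class EffortQueue:
      def __init__(self, capacity_per_tick=50):
          self.heap = []               # max-heap keyed on (negated) PoW effort
          self.counter = 0             # tie-breaker so requests need not be comparable
          self.capacity = capacity_per_tick
          self.suggested_effort = 0    # would be advertised in the service descriptor

      def enqueue(self, effort, request):
          # Higher verified effort sorts first (heapq is a min-heap, so negate).
          heapq.heappush(self.heap, (-effort, self.counter, request))
          self.counter += 1

      def service_tick(self):
          # Serve the highest-effort requests first, up to per-tick capacity.
          n = min(self.capacity, len(self.heap))
          served = [heapq.heappop(self.heap)[2] for _ in range(n)]
          # Auto-tune: raise the suggested effort while a backlog remains,
          # decay it back toward zero once the queue drains (so PoW is
          # effectively unused when there is no attack).
          if self.heap:
              self.suggested_effort += 1
          else:
              self.suggested_effort = max(0, self.suggested_effort - 1)
          return served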
[tor-relays] Exit Relay Operators: Please Upgrade to 0.4.7.7!
Tor 0.4.7.7-stable contains a very important performance improvement, called Congestion Control. You can read more about this improvement here: https://blog.torproject.org/congestion-contrl-047/

The TL;DR is that users of Tor 0.4.7 will experience faster performance when using Exits or Onion Services that have upgraded to 0.4.7.

We have packages available for Debian, Ubuntu, Fedora, CentOS, and BSD:
- Debian: https://support.torproject.org/apt/tor-deb-repo/
- Ubuntu: https://support.torproject.org/relay-operators/operators-4/
- Fedora/CentOS: https://support.torproject.org/rpm/tor-rpm-install/
- BSD: https://lists.torproject.org/pipermail/tor-relays/2022-May/020528.html

We would like to have a large fraction of Exit relays upgraded before our next Tor Browser Stable Release, on May 31st. Please let us know if you have any problems upgrading to this release.

Additionally, while non-Exit relays do not need to upgrade, they will notice the effects of congestion control. All relay operators who pay for bandwidth by the gigabyte may want to consider enabling hibernation, to avoid surprise cost increases: https://support.torproject.org/relay-operators/limit-total-bandwidth/

All relays may also experience higher CPU usage. If this is a problem, rate limiting relay bandwidth will also help: https://support.torproject.org/relay-operators/bandwidth-shaping/ (example torrc settings are sketched after this message).

We also recently fixed an issue with overload reporting in 0.4.6.10 and 0.4.7.7: https://gitlab.torproject.org/tpo/core/tor/-/issues/40560

This should mean that there are far fewer false positives in the overload reporting on https://metrics.torproject.org/rs.html. If you still see overload after upgrading to either of those versions, please see: https://support.torproject.org/relay-operators/relay-bridge-overloaded/

P.S. There is a known warn bug with vanguards-lite on first startup. It is harmless: https://gitlab.torproject.org/tpo/core/tor/-/issues/40603

--
Mike Perry
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
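For reference, the hibernation and rate-limiting suggestions above correspond to torrc options like the following. The values shown are only examples and should be adjusted to your own bandwidth plan; see the linked support pages for the full details:

  # Hibernation: the relay goes dormant for the rest of the accounting
  # period once the traffic limit is reached (example values only).
  AccountingStart month 1 00:00
  AccountingMax 1 TBytes

  # Rate limiting: cap sustained and burst relay bandwidth (example values only).
  RelayBandwidthRate 10 MBytes
  RelayBandwidthBurst 20 MBytes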