On Sat, Sep 08, 2012 at 07:59:03PM +0200, Fabio Pietrosanti (naif) wrote:
> That step came while brainstorming with hellais and vecna about a file
> upload system for big files that should try to optimize the transfer as
> much as possible over a TorHS with Javascript / web browser.
Tricks like this make sense when the bottleneck is at the edges of the
network. But when the bottleneck is in the core of the network (which is
the case with Tor, since it has far too much load relative to its relay
capacity), trying to 'optimize' your transfer by pretending to be
multiple people is an arms race that just makes things worse.

> While making big file transfers over a TorHS, there is the risk of
> ending up in a very low-bandwidth circuit, or even an unstable circuit,
> due to the length of a TorHS connection path (7 hops right?).

6 hops. But yes, this is a risk. We're trying to resolve it in general
with both a) the relay consensus weights (to shift traffic to the relays
that can handle it better) and b) the circuit-build-timeout computation
(where you as a client discard the slowest 20% of your circuits).

If everybody moved to the fastest 20% of the relays, though, it would be
worse than it is now. Good load balancing means using the slower relays
too, just less often. (But not *too* slow:
https://trac.torproject.org/projects/tor/ticket/1854 )

> The idea is:
> - to split the files that need to be transferred into chunks of fixed size
> - then send those chunks over multiple sockets
> - every time a new chunk has to be sent, open a new socket through a
>   different SOCKS port (so through a different Tor circuit)

I think you aren't considering how much cpu load is added by opening a
new circuit. In this case (for hidden services), circuit creation will
be on-demand (rather than preemptive), which means you'll be waiting for
each circuit to open (and a hidden service rendezvous involves many
circuits) before you can use it. This latency is exactly the sort of
thing that will get worse if people start overloading the network with
extra circuits.

This is another instance of the general problem that we see from a lot
of researchers: "I think Tor is slow because of X. Therefore I will
change my client behavior to do this other thing.
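For concreteness, here is a minimal sketch of the quoted proposal's
chunking step. The chunk size and function name are my own illustrative
choices, not anything from the proposal; the per-chunk SOCKS-port detail
is left as a comment, since that is exactly where the per-circuit setup
cost above bites:

```python
# Hypothetical sketch of the proposed scheme: split a payload into
# fixed-size chunks, each intended to be sent over its own SOCKS
# connection (and hence its own Tor circuit). CHUNK_SIZE is an
# arbitrary illustrative value.

CHUNK_SIZE = 64 * 1024  # 64 KiB

def split_into_chunks(data: bytes, chunk_size: int = CHUNK_SIZE) -> list[bytes]:
    """Split data into fixed-size chunks; the last chunk may be shorter."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

# In the proposal, each chunk would then go out through a different
# SOCKS port (e.g. one Tor SocksPort per chunk) -- one fresh on-demand
# rendezvous circuit per chunk, which is the overhead described above.
```

Each of those fresh circuits pays the full hidden-service rendezvous
setup cost before the first byte of its chunk moves.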
It works better for me now. Therefore every client should make this
change." That's why Rob and I have been pushing Shadow so much:
https://shadow.cs.umn.edu/
since whole-Tor-network simulators are the right way to evaluate
large-scale client behavior changes. I wonder if it would be worthwhile
to rig up a Shadow simulation to see the results. My guess is that if my
clients do it, it will ruin the network for the clients who don't;
whether it also ruins the network for the clients that *also* do it
remains to be seen.

See also http://freehaven.net/anonbib/#throttling-sec12

While I'm at it, there *are* several steps that would lead to
significantly improved hidden service performance:
https://trac.torproject.org/projects/tor/ticket/1944
plus the various performance and security fixes in the 'Tor hidden
service' category. I know it can be tempting to treat the Tor design and
code as a black box and try to hack around it, but I think in this case
the clear right thing to do is to make the code not suck so much.

--Roger
_______________________________________________
tor-talk mailing list
tor-talk@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk