On Fri, Jan 10, 2014 at 11:03:12AM -0800, Andrew Hume wrote:
> We (a colleague and I) have a system that probes a large number of
> IP addresses out there on the Internet. A probe consists of opening
> an HTTP URL on the target system; when we get back the (expected)
> 401 response, we close the connection. About 7% of the time we get
> no response, and we close the connection after several seconds.
> 
> On more or less any Intel-style server, we can run about 500-1000
> probes per second (via a driver program with 50-100 threads).
> 
> Here's the unexpected part: we took this program to a cloud vendor
> and ran it in a few instances, but no matter how many instances we
> run, once we reach about 1000 probes/s total, additional instances
> or higher driving rates do not increase the total rate; they simply
> increase the packet loss and the number of calls from the cloud
> vendor.
> 
> We've tried this on SilverLining (an AT&T cloud) and on RackSpace,
> with pretty much identical results. We've hypothesized a limit in a
> site-wide NAT box or some such, but that seems unlikely. Before we
> repeat this experiment with other cloud vendors, does anyone have
> comments on what this might be, or on whether other vendors might
> do better or worse?
> 

Ah-hah!

http://blog.quarkslab.com/tcp-backdoor-32764-or-how-we-could-patch-the-internet-or-part-of-it.html

quote:

Using a classical Linux system, a first implementation using the
select function and a five-second timeout would perform at best
~1000 tests/second. The limitation is mainly due to the fact
that the FD_SETSIZE value, on most Linux systems, is set by
default to 1024. This means that the maximum file descriptor
identifier that select can handle is 1024 (and thus the number
of descriptors is less than or equal to this limit). In the end,
this limits the number of sockets that can be opened at the same
time, and thus the overall scan performance.

Fortunately, other models that do not have this limitation
exist. epoll is one of them. After adapting our code, our system
was able to test about 6k IP/s (using 30k sockets
simultaneously). That is less than what masscan and/or zmap can
do (in terms of packets/s), but it gave good enough performance
for what we needed to do.


unquote
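
For what it's worth, you can check that compile-time ceiling on your
own machines. This minimal program just prints the constant (on glibc
it's 1024):

#include <stdio.h>
#include <sys/select.h>

int main(void)
{
    /* fd_set is a bitmap whose size is fixed at compile time. Calling
     * FD_SET() on a descriptor numbered FD_SETSIZE or higher writes
     * past the end of that bitmap, so select() simply cannot watch
     * more descriptors than this, however many sockets you open. */
    printf("FD_SETSIZE = %d\n", FD_SETSIZE);
    return 0;
}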


I think that's your problem: your ~1000 probes/s ceiling matches the
select()/FD_SETSIZE limit they describe almost exactly.
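
If it helps, here is a rough sketch of the epoll pattern the post
describes: many non-blocking connect()s in flight at once, with epoll
reporting each completion. The 192.0.2.1 target and port 80 are
placeholders, and the actual probe (sending the request, waiting for
the 401) is only indicated in a comment:

#include <arpa/inet.h>
#include <errno.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_EVENTS 256

/* Begin a non-blocking connect and register the socket with epoll.
 * Returns the fd, or -1 on error. */
static int start_probe(int epfd, const char *ip, int port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    /* Non-blocking connect returns immediately with EINPROGRESS. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 &&
        errno != EINPROGRESS) {
        close(fd);
        return -1;
    }

    /* EPOLLOUT fires when the connect completes, success or failure. */
    struct epoll_event ev;
    memset(&ev, 0, sizeof(ev));
    ev.events = EPOLLOUT;
    ev.data.fd = fd;
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

int main(void)
{
    int epfd = epoll_create1(0);
    if (epfd < 0) {
        perror("epoll_create1");
        return 1;
    }

    /* 192.0.2.1 is a documentation address standing in for the probe
     * list; a real scanner keeps thousands of these in flight. */
    start_probe(epfd, "192.0.2.1", 80);

    struct epoll_event events[MAX_EVENTS];
    for (;;) {
        /* 5000 ms matches the five-second probe timeout; a real
         * scanner would also track per-socket deadlines and close
         * connections that never answer. */
        int n = epoll_wait(epfd, events, MAX_EVENTS, 5000);
        if (n <= 0)
            break;
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            int err = 0;
            socklen_t len = sizeof(err);
            getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len);
            printf("fd %d: connect %s\n", fd,
                   err == 0 ? "succeeded" : strerror(err));
            /* Here the real probe would send the HTTP request and
             * wait (via EPOLLIN) for the expected 401, then close. */
            epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
            close(fd);
        }
    }
    close(epfd);
    return 0;
}

With a few tens of thousands of sockets in flight, this is the regime
where the post's authors report ~6k IP/s.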

-dsr-