In your Protocol, I would implement a specific error that
the server can return in such burst situations. The clients
should interpret that error as an instruction to reconnect
after a randomly chosen interval. That way you break up the
burst immediately, without running into the situation where
one burst triggers another, and another, and another…
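For the randomly chosen interval, full-jitter exponential backoff is a common choice: each failed attempt widens the window the random delay is drawn from, so reconnecting clients spread out instead of stampeding. A rough sketch (the function name and defaults are illustrative, not from Twisted):

```python
import random

def backoff_delay(attempt, base=1.0, cap=300.0):
    """Full-jitter exponential backoff.

    Pick a uniformly random delay in [0, min(cap, base * 2**attempt)]
    seconds. `attempt` is the number of consecutive failures so far;
    `cap` keeps the wait bounded no matter how long the outage lasts.
    """
    return random.uniform(0.0, min(cap, base * 2 ** attempt))
```

The client would sleep for `backoff_delay(attempt)` seconds before each reconnect, resetting `attempt` to 0 once a connection succeeds. (Twisted's `ReconnectingClientFactory` ships a similar delay-with-jitter scheme if you would rather not roll your own.)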
A good reference for that might be the TR-069 protocol
specification, which deals with HTTP connections from one
server to millions (!) of clients:
Hi all,
Before I start digging into the Twisted code, I'd just like
to bounce this off you in case the solution is obvious...

I have a lot of clients with permanent connections to my
TCP server. These clients are devices that buffer data if
they can't connect to the server. I can see a possible
problem in the future if/when, for whatever reason, there
is downtime on my server or the network: once the server is
back up, all these devices will start connecting and
transmitting their buffered data at the same time,
potentially flooding the server.
What would be a good area to start looking into to prevent
something like this from happening? My first thought is to
simply limit new connections to X per minute (or per X
seconds) and immediately drop any new connection once that
limit is exceeded (I'd probably implement this at the
Protocol level). Over time the connections should
normalise, since the buffered data on the devices is also
limited.
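The counting part of that could be a small token bucket, independent of Twisted itself; something like this rough sketch (the class name is just illustrative), with the assumed hook being that `Factory.buildProtocol` can return `None` to refuse a connection:

```python
import time

class ConnectionRateLimiter:
    """Token bucket: allow at most `rate` new connections per `per` seconds.

    Tokens refill continuously at rate/per per second; each accepted
    connection spends one token. When the bucket is empty, allow()
    returns False and the caller can drop the connection (e.g. by
    returning None from buildProtocol).
    """

    def __init__(self, rate, per, clock=time.monotonic):
        self.capacity = float(rate)
        self.tokens = float(rate)
        self.fill_rate = rate / per
        self.clock = clock
        self.last = clock()

    def allow(self):
        # Refill tokens for the time elapsed since the last call,
        # capped at the bucket capacity.
        now = self.clock()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.fill_rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A factory would then keep one `ConnectionRateLimiter(X, 60)` instance and consult `allow()` for each incoming connection.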
Of course it would be better to prevent the connections
from being established in the first place when the limit is
exceeded (that would also be great for DDoS protection),
but I have a feeling that might be difficult to achieve.
Any thoughts/tips or even links to examples?
Kind Regards,
Don
_______________________________________________
Twisted-Python mailing list
Twisted-Python@twistedmatrix.com
http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python