Hi, I have an application that ingests data and calls reactor.spawnProcess() for each chunk of data (a product), passing the chunk to a compiled binary on STDIN and then harvesting its STDOUT. It has been working well, but when my data rates get too high the machine gets overloaded with spawned processes (I think) and starts running out of file descriptors (yes, I can raise the limit :), causing all sorts of pain.

I'm wondering about a mechanism to throttle the number of spawned processes running at one time? It'd be nice to have only 10 of these spawned processes going at any one time. Thanks for your ideas :)
daryl

Code snippet:

    from twisted.internet import protocol, reactor

    class SHEFIT(protocol.ProcessProtocol):
        def __init__(self, tp):
            self.tp = tp
            self.data = ""

        def connectionMade(self):
            # Feed the product's raw data to the child's stdin, then close it.
            self.transport.write(self.tp.raw)
            self.transport.closeStdin()

        def outReceived(self, data):
            # Accumulate stdout until the child is done.
            self.data = self.data + data

        def errReceived(self, data):
            print "errReceived! with %d bytes!" % len(data)
            print data

        def outConnectionLost(self):
            # Child closed stdout; hand off the harvested output.
            really_process(self.tp, self.data)

    def got_product(tp):
        shef = SHEFIT(tp)
        reactor.spawnProcess(shef, "shefit", ["shefit"], {})

    def really_process(tp, data):
        print 'Do some work'
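One way to cap concurrency, as a minimal sketch assuming a Twisted recent enough to have twisted.internet.defer.DeferredSemaphore: have the protocol fire a Deferred when the child exits, and let the semaphore admit only a fixed number of spawns at once. ThrottledSHEFIT, spawn_one, and MAX_PROCS are illustrative names, not part of the original snippet.

    from twisted.internet import defer, reactor

    MAX_PROCS = 10  # at most this many children at once (illustrative)
    sem = defer.DeferredSemaphore(MAX_PROCS)

    class ThrottledSHEFIT(SHEFIT):
        def __init__(self, tp):
            SHEFIT.__init__(self, tp)
            self.done = defer.Deferred()

        def processEnded(self, reason):
            # Fires once the child has exited and all its pipes are
            # closed, so its file descriptors are gone before the
            # semaphore token is released.
            self.done.callback(None)

    def spawn_one(tp):
        shef = ThrottledSHEFIT(tp)
        reactor.spawnProcess(shef, "shefit", ["shefit"], {})
        return shef.done

    def got_product(tp):
        # run() waits for a free token, calls spawn_one(tp), and
        # releases the token when the Deferred it returns fires.
        sem.run(spawn_one, tp)

Because the token is released only from processEnded, the descriptor count stays bounded at roughly MAX_PROCS worth of pipes no matter how fast products arrive; excess got_product() calls just queue inside the semaphore.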