On 05/05/2017 01:10 PM, Cory Benfield wrote:
The first is that Twisted will break your code eventually. Private member 
attributes are not covered by Twisted’s deprecation policy, and they can be 
changed without warning for any reason. So you’ll need to pin your Twisted 
version.
I feel uncomfortable with this; that's why we are corresponding. :)
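(Pinning itself would of course be trivial, e.g. a single exact-version line in
a requirements file; the version below is only an example:

Twisted==17.1.0

but I'd rather not be forced into that.)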
As a second note, you may lock yourself out of HTTP/2. HTTP/2 is not guaranteed 
to give you access to a raw transport object (though it might), because in 
HTTP/2 the protocol is not a dumb byte pipe like it is in HTTP/1.1. Code like 
this forces Twisted devs who want to add HTTP/2 support (like myself) to 
implement HTTP/2 as a multiple-object abstraction to allow each 
request/response pair’s underlying “transport” member to act like a dumb 
byte-pipe transport, when we’d much rather use a less complex abstraction (as 
an example you should look at the HTTP/2 server code in twisted.web, which has 
multiple classes to maintain this fiction that you can just call 
“transport.write” and expect that to work).
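(Just so I'm sure I follow the "fiction" you describe: I imagine something
roughly like the sketch below, where every stream gets its own pseudo-transport
that forwards writes to the shared connection. All of the names here are made
up by me, not the real twisted.web HTTP/2 classes.)

class H2StreamPseudoTransport(object):
    # Hypothetical sketch: presents a dumb byte-pipe "transport" to the
    # request/response code, while the bytes really become DATA frames for
    # one stream on a shared HTTP/2 connection.

    def __init__(self, connection, streamID):
        self._connection = connection   # the single connection-level protocol
        self._streamID = streamID       # the multiplexed stream this serves

    def write(self, data):
        # What looks like writing raw bytes is really framing data for one
        # particular stream on the shared connection (hypothetical method).
        self._connection.writeDataToStream(self._streamID, data)

    def loseConnection(self):
        # "Closing the transport" only ends this stream, not the TCP
        # connection (hypothetical method).
        self._connection.endStream(self._streamID)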
Having HTTP/2 (along with 1.1) would of course be best, but currently I can easily live without it; it's far from being standard. Still, its multiplexing would be one of the greatest achievements here (if correctly implemented). Copying objects over a lot of HTTP/TCP channels is sometimes too stressful (too many connections, TIME_WAIT problems, etc.).
However, you’re right that this is not ideal. I think the best solution would 
be an enhancement to twisted.web that updates the default Response object to 
accept an IConsumer as the protocol argument of deliverBody. This would allow 
t.w._newclient.Response to be the arbiter of what it means to “pause” 
production, and allow you to continue to proxy between the two but without 
accessing a private member (you’d get given the producer you need to pause in 
registerProducer).

If that’s an enhancement you’d be interested in, I can work with you to get 
that patch in place. Then your code would change a bit (note that this code 
won’t work right now):
Absolutely. I think this use case is far from brain-dead, so if it's possible to do it right out of the box, I guess everybody wins.


import treq
from zope.interface import implements

from twisted.internet import protocol
from twisted.internet.defer import Deferred, inlineCallbacks
from twisted.internet.interfaces import IConsumer
from twisted.web.client import ResponseDone
from twisted.web.iweb import IBodyProducer


class UploadProducer(protocol.Protocol):
    implements(IBodyProducer, IConsumer)

    def __init__(self, get_resp):
        self.length = get_resp.length
        self._producer = None
        self._consumer = None
        self._completed = Deferred()

    # IConsumer
    def registerProducer(self, producer, streaming):
        assert streaming
        self._producer = producer
        # Hold back the GET side until the PUT side has asked for the body.
        if self._consumer is None:
            self._producer.pauseProducing()

    def unregisterProducer(self):
        # The GET response has finished delivering its body; nothing to do.
        pass

    def write(self, data):
        self._consumer.write(data)

    # IProtocol
    def connectionLost(self, reason):
        if reason.check(ResponseDone):
            self._completed.callback(None)
        else:
            self._completed.errback(reason)

    # IBodyProducer
    def startProducing(self, consumer):
        self._consumer = consumer
        if self._producer is not None:
            self._producer.resumeProducing()
        return self._completed

    def resumeProducing(self):
        self._producer.resumeProducing()

    def pauseProducing(self):
        self._producer.pauseProducing()

    def stopProducing(self):
        self._producer.stopProducing()


@inlineCallbacks
def copy(src, dst):
    # Stream the GET response body straight into the body of the PUT request.
    get_resp = yield treq.get(src, unbuffered=True)
    print "GET", get_resp.code, get_resp.original
    producer = UploadProducer(get_resp)
    get_resp.deliverBody(producer)
    put_resp = yield treq.put(dst, data=producer)
    print "PUT", put_resp, put_resp.code
This looks much clearer than Phil's solution and avoids the error-prone custom buffering, which is nice.
What can I do to make this happen? :)
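(For my own understanding, here is how I picture the Response side of that
enhancement; this is purely hypothetical, and the helper names below don't
exist in t.w._newclient today:)

from twisted.internet.interfaces import IConsumer

def deliverBody(self, protocolOrConsumer):
    # Hypothetical sketch of an enhanced t.w._newclient.Response.deliverBody.
    if IConsumer.providedBy(protocolOrConsumer):
        # The Response stays the arbiter of "pause": it registers a streaming
        # producer with the consumer, so the consumer receives the thing to
        # pause/resume in registerProducer() instead of poking at a private
        # transport attribute.
        protocolOrConsumer.registerProducer(self._bodyProducer, True)  # hypothetical attribute
        self._startDeliveringTo(protocolOrConsumer)                    # hypothetical helper
    else:
        # Current behaviour: deliver the body to an IProtocol as before.
        self._startDeliveringToProtocol(protocolOrConsumer)           # hypothetical helper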
With this arrangement as well it’d potentially be possible to use something 
like tubes, or at least get closer to using tubes for this use case. Right now 
it’s a bit of an annoyance that t.w._newclient doesn’t allow the body receiving 
protocol to exert backpressure on the data.
Apart from correctness, decent performance would also be good. I don't know how tubes compare to this, but the current (not nice) solution can easily transfer more than a gigabit/s with a single process; I consider that a good baseline. :)
Anyway, just a thought.

Thank you very much for joining in and for your help.
