[Twisted-Python] gRPC support in Twisted Python

2016-09-28 Thread Nursimulu, Khen
Hello,

Is there a plan (or an implementation) to support gRPC within Twisted Python?
My understanding is that gRPC is built using Futures and creates its own
threads for all its event handling.  There is also a gRPC Python package
(grpcio 1.0.0) available for Python 2.7.  Is running gRPC in its own thread
the only way to use it with Twisted Python on 2.7?

Thanks
Khen
___
Twisted-Python mailing list
Twisted-Python@twistedmatrix.com
http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python


Re: [Twisted-Python] gRPC support in Twisted Python

2016-09-28 Thread Glyph Lefkowitz

> On Sep 28, 2016, at 6:13 AM, Nursimulu, Khen  wrote:

> Is there a plan (or an implementation) to support gRPC within Twisted Python?
> My understanding is that gRPC is built using Futures and creates its own
> threads for all its event handling.  There is also a gRPC Python package
> (grpcio 1.0.0) available for Python 2.7.  Is running gRPC in its own thread
> the only way to use it with Twisted Python on 2.7?

There's no plan that I'm aware of.  You could definitely run gRPC in a thread
currently, although it would be nice if gRPC worked natively with Twisted.
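
For what it's worth, a rough, untested sketch of the threaded approach could
look like this (the EchoService stub, EchoRequest message, field name, and
address are hypothetical placeholders for whatever your .proto generates):

import grpc

from twisted.internet import defer, threads

# Hypothetical classes generated from a .proto file; substitute your own.
from my_service_pb2 import EchoRequest
from my_service_pb2_grpc import EchoServiceStub


def makeStub(target="localhost:50051"):
    # The gRPC channel manages its own I/O threads internally, so it can be
    # created outside the reactor thread.
    channel = grpc.insecure_channel(target)
    return EchoServiceStub(channel)


@defer.inlineCallbacks
def echoViaThreadPool(stub, message):
    # The generated stub call blocks, so push it onto the reactor's thread
    # pool and get the result back as a Deferred.
    response = yield threads.deferToThread(stub.Echo, EchoRequest(text=message))
    defer.returnValue(response.text)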

Probably contributing this upstream into the gRPC project would be the best way
to start; if they're not receptive, the next step would be starting a separate
'txgRPC' project.

-glyph



Re: [Twisted-Python] gRPC support in Twisted Python

2016-09-28 Thread Nursimulu, Khen
Thanks, Glyph, for the prompt response.




Re: [Twisted-Python] gRPC support in Twisted Python

2016-09-28 Thread Werner Thie

On 9/28/16 9:13 AM, Nursimulu, Khen wrote:

Thanks, Glyph, for the prompt response.



Interesting.  With the browser implementation of gRPC in the works, this could
become a full replacement for Nevow/Athena.


Werner



Re: [Twisted-Python] How do you determine the buffer size of a transport - a use-case for not using back pressure

2016-09-28 Thread Glyph Lefkowitz
Hi Steve,

It looks like I had marked this message as interesting and warranting a reply, 
but never got around to it.  I'm sorry it's been quite a while!  I appreciate 
the amount of research you did here :-).

> On Aug 17, 2016, at 3:43 PM, Steve Morin  wrote:
> 
> Twisted Community
> 
> Problem: How do you determine the buffer size of a transport, to know how 
> much data is waiting to be transmitted from using transport.write?
> 
> Wait! You're going to say: use the Producer Consumer API
> (http://twistedmatrix.com/documents/current/core/howto/producers.html)

This is, unfortunately, the only solution :).

> To do what: So that instead of using back pressure, I can check the buffer
> and, when it's "too big/full", decide to do something to the transport I am
> writing to:

I think when you say "back pressure" you're referring to your program exerting
back pressure on its peer.  I understand why you don't want to do that.
However, there's another kind of back pressure: your peer exerting back
pressure on your program.

Commensurately, there are two ways to use back pressure:

- To exert back pressure on your peer, call `self.transport.pauseProducing()`.
  Later, when you're ready to receive more data, call
  `self.transport.resumeProducing()`.  This is what you don't want to do.
- To detect when back pressure is applied by your peer, call
  `self.transport.registerProducer(self, True)`; then the reactor will call
  pauseProducing() when its buffer is full and resumeProducing() when it
  empties out again.
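
In code, the second option looks roughly like this (a sketch only; the
protocol name and the isProducing attribute are made up for illustration):

from zope.interface import implementer

from twisted.internet.interfaces import IPushProducer
from twisted.internet.protocol import Protocol


@implementer(IPushProducer)
class BackPressureAwareProtocol(Protocol):
    # Hypothetical protocol that registers itself as a streaming (push)
    # producer, so the transport tells it when its send buffer fills up and
    # when it drains again.

    def connectionMade(self):
        self.isProducing = True
        # True means "streaming producer": the transport calls
        # pauseProducing()/resumeProducing() on us as its buffer fills and
        # empties.
        self.transport.registerProducer(self, True)

    def pauseProducing(self):
        # The transport's write buffer is full; treat it as "full".
        self.isProducing = False

    def resumeProducing(self):
        # The buffer emptied out; treat it as "empty" again.
        self.isProducing = True

    def stopProducing(self):
        # The connection is going away; stop generating data entirely.
        self.isProducing = False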

Your list of things you might want to do here:

> - Buffer to disk instead of memory
> - Kill the transport
> - Decide to skip sending some data
> - Send an error or message to the transport I am writing to
> - Reduce the resolution, increase the compression (things like video or audio)

is a good one, and all these things can be achieved.  Going through them:

If you want to buffer to disk instead of memory, have methods like:

def someDataToSendToPeer(self, someData):
    # While the peer is keeping up, write straight to the transport;
    # once we've been paused, spool the data to a file on disk instead.
    if self.isProducing:
        self.transport.write(someData)
    else:
        self.bufferFile.write(someData)

def pauseProducing(self):
    # The transport's buffer is full: start spooling to disk.
    self.isProducing = False
    self.bufferFile = open("buffer.file", "wb")

def resumeProducing(self):
    # The buffer drained: replay what was spooled, then go back to
    # writing directly.
    self.isProducing = True
    self.startUnbufferingFromFile()
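
startUnbufferingFromFile() is left undefined above; one naive way to fill it
in, under the same single-buffer-file assumption, would be:

def startUnbufferingFromFile(self):
    # Replay whatever was spooled to disk while writes were paused.  This
    # naive version reads the whole file back in one go; a real
    # implementation would probably replay it in chunks so a large backlog
    # can't exhaust memory itself.
    self.bufferFile.close()
    with open("buffer.file", "rb") as spooled:
        self.transport.write(spooled.read())
    self.bufferFile = None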

If you want to kill the transport,

def pauseProducing(self):
    self.transport.abortConnection()

If you want to reduce video stream quality,

def streamSomeRawVideo(self, someRawVideo):
    if self.isProducing:
        self.transport.write(
            self.videoBuffer.addAndEncodeToBytes(someRawVideo))
    else:
        self.videoBuffer.addAndCompressSomeMore(someRawVideo)

and so on, and so on.

Basically, you can treat the buffer as "empty" until pauseProducing() is 
called.  Once it is, you can treat it as "full".

Hope this was helpful, and still timely enough for you to make some use of it 
:).

-glyph