Oops. Sorry. Wrong thread. That response is actually for this issue <https://github.com/grpc/grpc-java/issues/2029> instead.
For your actual issue, I'm still wondering what version of grpc-java you're using. Also, are you using any proxy between the client and server? As a shot in the dark, you may try updating to 3.0.0-pre1, but I'm not aware of anything specific that I would expect to fix your issue, unless you are using a proxy that adds padding (rare).

On Mon, Aug 1, 2016 at 10:18 AM, Eric Anderson <[email protected]> wrote:
> It looks like isCancelled() is a bit useless right now, since its value
> changes too late <https://github.com/grpc/grpc-java/issues/2112>. Thanks
> for bringing this to our attention. Sorry it took so long for me to look at it.
>
> On Mon, Jul 25, 2016 at 4:55 PM, ran.bi via grpc.io <[email protected]> wrote:
>
>> I just noticed that the problem only happens when I make multiple large
>> server->client streaming RPCs simultaneously.
>> I haven't been able to reproduce the issue in a single thread.
>>
>> Could it be a race condition on the client side?
>>
>> On Monday, July 25, 2016 at 4:21:49 PM UTC-7, [email protected] wrote:
>>>
>>> Anyone have any idea about the potential cause?
>>>
>>> On Tuesday, July 19, 2016 at 3:43:40 PM UTC-7, [email protected] wrote:
>>>>
>>>> 1.2 million protos, and the total serialized data is a few GB, I think.
>>>> The client thread is blocked here:
>>>>
>>>>> - waiting on <0x7552c6db> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>>>>> - locked <0x7552c6db> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>>>>> at sun.misc.Unsafe.park(Native Method)
>>>>> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>> at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>> at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>>>> at io.grpc.stub.ClientCalls$ThreadlessExecutor.waitAndDrain(ClientCalls.java:499)
>>>>> at io.grpc.stub.ClientCalls$BlockingResponseStream.waitForNext(ClientCalls.java:421)
>>>>> at io.grpc.stub.ClientCalls$BlockingResponseStream.hasNext(ClientCalls.java:434)
>>>>
>>>> On Tuesday, July 19, 2016 at 3:28:37 PM UTC-7, Eric Anderson wrote:
>>>>>
>>>>> On Tue, Jul 19, 2016 at 12:20 PM, ran.bi via grpc.io <[email protected]> wrote:
>>>>>
>>>>>> I don't think that is the problem in my case.
>>>>>> My server is basically like:
>>>>>>
>>>>>> for all data
>>>>>>   stream.onNext(data)
>>>>>> stream.onComplete()
>>>>>>
>>>>>> I don't see how that could cause the problem you described.
>>>>>
>>>>> Hmm... How much data are we talking here (total)? What version of
>>>>> grpc-java are you using?
>>>>
>>>> --
>> You received this message because you are subscribed to the Google Groups "grpc.io" group.
>> To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
>> To post to this group, send email to [email protected].
>> To view this discussion on the web visit https://groups.google.com/d/msgid/grpc-io/ec7d7406-18da-424e-afb8-4fd776a62377%40googlegroups.com.
>>
>> For more options, visit https://groups.google.com/d/optout.
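The pattern discussed in the thread — a server loop calling onNext() per message then onCompleted(), while the blocking client thread parks in LinkedBlockingQueue.take() until the next message arrives (the waitAndDrain frame in the stack trace) — can be mimicked in plain Java. This is a minimal self-contained sketch, not grpc-java itself: the StreamObserver interface below is a local stand-in for io.grpc.stub.StreamObserver, and StreamingSketch, runOnce, and the EOS sentinel are illustrative names with no gRPC counterpart (real gRPC also applies flow control, which this sketch omits).

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.LinkedBlockingQueue;

public class StreamingSketch {
    // Local stand-in for grpc-java's StreamObserver (illustrative only).
    interface StreamObserver<T> {
        void onNext(T value);
        void onCompleted();
    }

    // Sentinel marking end-of-stream on the hand-off queue.
    private static final String EOS = "\u0000EOS";

    public static List<String> runOnce(int messageCount) {
        LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>();

        // "Server" side: the loop from the thread -- onNext() per message, then onCompleted().
        StreamObserver<String> responseObserver = new StreamObserver<String>() {
            @Override public void onNext(String value) { queue.add(value); }
            @Override public void onCompleted() { queue.add(EOS); }
        };

        Thread server = new Thread(() -> {
            for (int i = 0; i < messageCount; i++) {
                responseObserver.onNext("msg-" + i);
            }
            responseObserver.onCompleted();
        });
        server.start();

        // "Client" side: a blocking iterator. take() parks this thread exactly like
        // the LinkedBlockingQueue.take() frame in the stack trace above.
        List<String> received = new ArrayList<>();
        try {
            while (true) {
                String msg = queue.take();   // parks until the server produces or completes
                if (EOS.equals(msg)) {
                    break;
                }
                received.add(msg);
            }
            server.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException(e);
        }
        return received;
    }

    public static void main(String[] args) {
        List<String> got = runOnce(5);
        System.out.println(got.size());
        System.out.println(got.get(0));
    }
}
```

If the server stops producing (or a flow-control window fills up in real gRPC), the client thread stays parked in take() indefinitely, which is exactly the state the thread dump in the thread shows.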
