I ended up restarting the RPC call every 60s, since a dedicated 
application-level ping RPC would keep succeeding even though the 
long-running RPC call's state had been silently lost server-side. It seems 
a bit inefficient.

Semantically speaking, what is the intended behavior when a long-running RPC 
call can't complete due to a network issue?  I couldn't find that case in 
the spec, and even in the more graceful failure case where the server could 
send an RST_STREAM, the client didn't bubble up any indication to the app 
that the stream was lost.
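As an aside, the transport-level alternative mentioned in the replies below is 
gRPC's own HTTP/2 keepalive ping, which (unlike an application ping) will tear 
down the connection if the ack never arrives. On a Go client it is configured 
roughly as follows; the values are purely illustrative, and whether these pings 
actually reach the server through an intermediate proxy like Envoy is exactly 
the open question in this thread:

```go
package main

import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/keepalive"
)

func dial(addr string) (*grpc.ClientConn, error) {
	return grpc.Dial(addr,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithKeepaliveParams(keepalive.ClientParameters{
			Time:                30 * time.Second, // ping after 30s of inactivity
			Timeout:             10 * time.Second, // close conn if no ack within 10s
			PermitWithoutStream: true,             // ping even with no active RPC
		}),
	)
}
```

Note the server side enforces a minimum ping interval, so an aggressive `Time` 
value can get the client GOAWAY'd with "too_many_pings".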

On Wednesday, December 1, 2021 at 10:39:47 AM UTC-8 [email protected] wrote:

> Having an HTTP/2 proxy in between is muddying the waters for keepalive. I 
> believe Istio/Envoy have settings for keepalives that you might be able to 
> employ here. If that doesn't work for you either, you might want to 
> consider a custom application-level ping.
>
> On Tuesday, November 9, 2021 at 6:28:41 PM UTC-8 C. Schneider wrote:
>
>> Hi,
>>
>> For a chat service I have the client connect to a gRPC server running in 
>> Istio (and using FlatBuffers).
>>
>> When the server is shut down, the TCP connection stays open (to Istio, it 
>> appears), but the client doesn't detect that the server went away. It 
>> keeps sending keepalives, expecting the server to eventually send data, 
>> but the server never will, since its RPC call state was lost when it 
>> shut down.
>>
>> What are the expected RPC stream semantics when the server goes away 
>> mid-stream? Should the client be able to detect this and restart the RPC 
>> stream?
>>
>> Thanks!
>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/67d54d69-6a9f-4ee6-a99e-b77b671c2619n%40googlegroups.com.
