I assume your RPC is unary (correct me if that's not the case). You can (1) use NettyServerBuilder.maxConcurrentCallsPerConnection() to limit the number of concurrent calls per client connection; (2) in the server application implementation, send responses slowly when possible (e.g. sleep a little before sending out the response when the server is too busy). To limit the total number of connections to the server, the discussion in https://github.com/grpc/grpc-java/issues/1886 may help.

On Friday, January 17, 2020 at 1:42:28 AM UTC-8 [email protected] wrote:
> Apache Ratis is a Java implementation of the RAFT consensus protocol and uses grpc as a transport protocol. Currently we have multiple clients connecting to a server. The server has limited resources available to handle the client requests, and it fails the requests which it cannot handle. These resources are in the application layer. Since client requests can be large in size, failing them creates a lot of garbage. We want to push back on the clients until resources become available, without creating a lot of garbage.
>
> Based on my understanding, flow control in grpc works by controlling the amount of data buffered in the receiver. In our use case we want the server to have no more than x requests which it has to process. Let's assume that the server enqueues the requests it receives in a queue for processing (I don't think the isReady control would work in this scenario?). Is it possible for the server to limit the number of requests it receives from the clients? Is it possible for the server to stop receiving data from the socket?
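Suggestion (1) above can be sketched roughly as follows. This is a minimal, hedged example assuming grpc-java with the Netty transport on the classpath; the port, the limit of 10, and the service implementation class are all illustrative placeholders, not values from the thread:

```java
import io.grpc.Server;
import io.grpc.netty.NettyServerBuilder;

public class LimitedConcurrencyServer {
    public static void main(String[] args) throws Exception {
        Server server = NettyServerBuilder.forPort(8080)
                // Caps the number of concurrent calls a single client
                // connection may have in flight. Under the hood this maps to
                // the HTTP/2 MAX_CONCURRENT_STREAMS setting, so a well-behaved
                // client holds back further RPCs instead of the server having
                // to receive and then fail them. 10 is an illustrative value.
                .maxConcurrentCallsPerConnection(10)
                // MyRatisServiceImpl is a hypothetical service implementation
                // standing in for the application's actual grpc service.
                .addService(new MyRatisServiceImpl())
                .build()
                .start();
        server.awaitTermination();
    }
}
```

Note this limit is per connection, not global; a client that opens multiple channels gets the limit per channel, which is why the linked issue about capping total connections is also relevant.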
