On Thu, Nov 2, 2017 at 11:26 AM, <[email protected]> wrote:

> On my client, I am using Guava's RateLimiter
> <https://google.github.io/guava/releases/22.0/api/docs/index.html?com/google/common/util/concurrent/RateLimiter.html>
> to send messages in a bi-di stream at 1000 per second. (All using a shared
> channel and stub.) Each message I am sending in a Runnable() just to
> parallelize the work.
That doesn't sound right, unless it is parallel across streams. onNext is
not thread-safe; you'd need to hold a lock when calling it from multiple
threads simultaneously. This isn't the cause of your problem, but it is a
problem.
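A minimal sketch of serializing those onNext calls with a shared lock. The StreamObserver interface here is a local stand-in (so the snippet runs on its own without a channel); in real code you'd use the observer returned by the async stub:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SynchronizedSendSketch {
  // Local stand-in for io.grpc.stub.StreamObserver, only to keep the sketch
  // self-contained; real code would use the generated stub's observer.
  interface StreamObserver<T> {
    void onNext(T value);
  }

  static int sendAll(int n) throws InterruptedException {
    AtomicInteger sent = new AtomicInteger();
    StreamObserver<String> requestObserver = msg -> sent.incrementAndGet();

    // onNext is not thread-safe, so every caller must hold the same lock.
    Object streamLock = new Object();

    ExecutorService threadPool = Executors.newFixedThreadPool(4);
    for (int i = 0; i < n; i++) {
      threadPool.submit(() -> {
        synchronized (streamLock) {
          requestObserver.onNext("request");
        }
      });
    }
    threadPool.shutdown();
    threadPool.awaitTermination(10, TimeUnit.SECONDS);
    return sent.get();
  }

  public static void main(String[] args) throws Exception {
    System.out.println("sent=" + sendAll(100));
  }
}
```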
> Same behavior happens if I just call `onNext` directly without the task
> submission step.
> Code roughly looks like:
>
> final long startTime = System.currentTimeMillis();
> final long oneMinute = TimeUnit.MINUTES.toMillis(1);
> final RateLimiter rateLimiter = RateLimiter.create(1000);
> final StreamObserver<TestMessageRequest> requestObserver =
>     client.asyncStub.testMessageRpc(client.replyObserver);
>
> while (System.currentTimeMillis() - startTime < oneMinute) {
>   rateLimiter.acquire(1);
>   threadPool.submit(() -> {
>     TestMessageRequest request = TestMessageRequest.getDefaultInstance();
>     requestObserver.onNext(request);
>   });
> }
>
You're not observing outbound flow control, so you're sending messages
faster than they are consumed. That causes the buffered messages to consume
too much memory. You need to use ClientCallStreamObserver.isReady() and
setOnReadyHandler(). Your replyObserver will need to implement
ClientResponseObserver in order to call setOnReadyHandler() during
beforeStart(). When isReady() is false, pause sending to avoid excessive
buffering; when isReady() transitions from false back to true, the handler
you passed to setOnReadyHandler() will be invoked.
See the ManualFlowControlClient
<https://github.com/grpc/grpc-java/blob/master/examples/src/main/java/io/grpc/examples/manualflowcontrol/ManualFlowControlClient.java>,
although you don't need to mess with disableAutoInboundFlowControl() and
request().
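In outline, the on-ready-handler pattern looks like the following. ToyCallObserver is a hypothetical stand-in for the relevant parts of ClientCallStreamObserver, so the sketch runs without a real channel; with gRPC you'd obtain the real observer in beforeStart() of your ClientResponseObserver:

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.atomic.AtomicInteger;

public class FlowControlSketch {
  // Hypothetical stand-in for the parts of io.grpc.stub.ClientCallStreamObserver
  // used here. It "buffers" up to two messages and reports not-ready once full,
  // like a transport whose flow-control window is exhausted.
  static class ToyCallObserver {
    final Queue<String> buffer = new ArrayDeque<>();
    final int capacity = 2;
    Runnable onReadyHandler;

    boolean isReady() { return buffer.size() < capacity; }
    void setOnReadyHandler(Runnable handler) { onReadyHandler = handler; }
    void onNext(String msg) { buffer.add(msg); }

    // Simulates the transport draining its buffer and re-invoking the handler,
    // which is what gRPC does when isReady() flips back to true.
    void drain() {
      buffer.clear();
      if (onReadyHandler != null) onReadyHandler.run();
    }
  }

  static int run(int total) {
    ToyCallObserver requestObserver = new ToyCallObserver();
    AtomicInteger next = new AtomicInteger();

    // Send only while the stream is ready; return and wait to be re-invoked
    // instead of buffering without bound.
    requestObserver.setOnReadyHandler(() -> {
      while (requestObserver.isReady() && next.get() < total) {
        requestObserver.onNext("request-" + next.getAndIncrement());
      }
    });

    requestObserver.onReadyHandler.run();  // the stream starts out ready
    while (next.get() < total) {
      requestObserver.drain();             // transport frees space, handler re-fires
    }
    return next.get();
  }

  public static void main(String[] args) {
    System.out.println("sent=" + run(5));
  }
}
```

The key design point is that the handler returns as soon as isReady() goes false, rather than blocking or queuing, so memory stays bounded by the transport's window.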
> So anywhere from the 20-60 second mark my server throws a:
>
> SEVERE: Exception while executing runnable io.grpc.internal.ServerImpl$
> JumpToApplicationThreadServerStreamListener$1@48544a03
> [java] java.lang.OutOfMemoryError: GC overhead limit exceeded
>
> Am I doing something wrong? Is there any way to have the server support
> this high load?
Is that 1000 QPS total, and all over one stream? A single stream can't use
much more than a single core of processing (excluding protobuf and the
application), so you may want to use more streams. But 1k QPS is really
low. We see 750k QPS
<https://performance-dot-grpc-testing.appspot.com/explore?dashboard=5652536396611584>
("Streaming secure throughput QPS (8 core client to 8 core server)")
between a client and server with 8 cores each. Even with non-streaming RPCs
we see 250k QPS.
