Hi @Eric - I am performing a similar benchmark comparing the performance of 
gRPC and REST, and my results are similar to @Shobhit's. Here is my repo 
with the code for your reference 
- https://github.com/pushpakmittal/grpc-rest-benchmark

I am also using JMeter to perform the benchmark.

I am observing that, for the same number of records in the response, REST 
performs better in terms of latency.
On Tuesday, February 28, 2023 at 6:26:12 AM UTC+5:30 Eric Anderson wrote:

> I hate to sound like a broken record, but without the code, there's little 
> I can do. I don't know what you are benchmarking, so I can't explain the 
> results.
>
> I'm glad to hear that using a fixed thread pool helped, although I don't 
> know what the previous code was doing (whether you were using the default 
> or had a different executor). I will say that 100 threads is probably 
> excessive. A good starting point for non-blocking services is ½-2 times the 
> number of cores, depending on what else is going on. In your case, you're 
> sharing the single machine between client and server which makes it all the 
> harder to tune. I suggest running the client and server on different 
> machines.
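>
> As a rough illustration of that sizing rule (a sketch, not from the 
> benchmark code; the names here are illustrative), derive the pool size 
> from the core count instead of a fixed magic number like 100:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ExecutorSizing {
    public static void main(String[] args) {
        // For non-blocking services, somewhere between 0.5x and 2x the
        // core count is a sane starting point; tune from there.
        int cores = Runtime.getRuntime().availableProcessors();
        int threads = Math.max(1, 2 * cores); // upper end of the range
        ExecutorService executor = Executors.newFixedThreadPool(threads);
        System.out.println("cores=" + cores + " threads=" + threads);
        executor.shutdown();
    }
}
```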
>
> I'm not familiar with how the jmeter or the grpc plugin handle channels. 
> If it is using a single channel, then you will be limited to approximately 
> a core of throughput. For load tests, you generally would use multiple 
> channels to increase the number of connections. Sometimes that is needed in 
> practice in real clients, but more often each server has many clients and 
> each client has a separate channel/connection so clients don't need to use 
> multiple.
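>
> A minimal sketch of the multi-channel idea (a hypothetical helper, not 
> from jmeter or the plugin): build several channels up front and hand 
> them out round-robin so RPCs spread across connections. `T` would be 
> `io.grpc.ManagedChannel` in a real client; it is kept generic here so 
> the sketch is self-contained:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class ChannelPool<T> {
    private final List<T> channels;
    private final AtomicInteger counter = new AtomicInteger();

    public ChannelPool(List<T> channels) {
        this.channels = channels;
    }

    // Hand out channels round-robin so load spreads across connections
    // instead of serializing onto a single one (~one core of throughput).
    public T next() {
        int i = Math.floorMod(counter.getAndIncrement(), channels.size());
        return channels.get(i);
    }

    public static void main(String[] args) {
        ChannelPool<String> pool =
            new ChannelPool<>(List.of("ch0", "ch1", "ch2"));
        for (int i = 0; i < 5; i++) {
            System.out.println(pool.next()); // ch0, ch1, ch2, ch0, ch1
        }
    }
}
```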
>
> For reference, do look at our public benchmark dashboard 
> <https://grafana-dot-grpc-testing.appspot.com/>. "Unary secure throughput 
> QPS (8 core client to 8 core server)" may be similar to what you are trying 
> to do here, and it gets 171 Kqps. See 
> https://grpc.io/docs/guides/benchmarking/ for a description of what the 
> tests are doing to put the numbers in perspective.
>
> On Sun, Feb 26, 2023 at 11:52 PM shobhit agarwal <[email protected]> 
> wrote:
>
>> Hi Eric,
>>
>> After setting the thread pool size to 100, I am now getting a 
>> throughput of 1.6k req/sec, but with REST (GET method) on the same 
>> setup it's 2.7k req/sec.
>>
>> My client is JMeter, and the client and server are on the same machine.
>>
>> Do I need to set channelType as well for better performance?
>>
>> Below is my code:
>>
>> @Bean
>> GrpcServerConfigurer grpcServerConfigurer() {
>>     return builder -> {
>>         ((NettyServerBuilder) builder)
>>             .executor(Executors.newFixedThreadPool(100));
>>     };
>> }
>>
>> @GrpcService
>> public class MyGrpcService extends SquareRpcGrpc.SquareRpcImplBase {
>>     @Override
>>     public void findSquareUnary(Input request,
>>             StreamObserver<Output> responseObserver) {
>>         int number = request.getNumber();
>>         responseObserver.onNext(
>>             Output.newBuilder()
>>                 .setNumber(number)
>>                 .setResult(number * number)
>>                 .build());
>>         responseObserver.onCompleted();
>>     }
>> }
>>
>>
>> On Thursday, 23 February 2023 at 23:05:41 UTC+5:30 Eric Anderson wrote:
>>
>>> Oops. Failed to cc the mailing list.
>>>
>>> There was a reply to this saying the JVM was warm. I maintained that 
>>> without the actual benchmark code it is hard to diagnose.
>>>
>>> ---------- Forwarded message ---------
>>> From: Eric Anderson <[email protected]>
>>> Date: Tue, Feb 21, 2023 at 9:08 AM
>>> Subject: Re: [grpc-io] grpc vs rest benchmark
>>> To: shobhit agarwal <[email protected]>
>>>
>>>
>>> 100 ms is much too high. You have probably not warmed up the JVM. See 
>>> https://stackoverflow.com/a/47662646/4690866
>>>
>>> Without a reproducible benchmark and information on where you're running 
>>> it, this isn't very interesting/relevant. It's just far too easy to make a 
>>> useless benchmark and 200 qps is too far from expected numbers without an 
>>> easy reproduction and machine details. "Running on Windows" is an important 
>>> detail, but it's nowhere near enough information to understand what the 
>>> results mean.
>>>
>>> (Benchmarks are perfect measurements, but it is hard to figure out what 
>>> they are measuring.)
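>>>
>>> A minimal warm-up sketch (illustrative only; `work` is a stand-in for 
>>> the RPC under test, and a real benchmark should use a harness like 
>>> JMH): run the measured operation many times before timing so the JIT 
>>> has compiled the hot path:

```java
public class WarmupSketch {
    // Hypothetical stand-in for the operation being benchmarked.
    static long work(int n) {
        return (long) n * n;
    }

    public static void main(String[] args) {
        // Warm-up: exercise the hot path so the JIT compiles it
        // before any measurement is taken.
        long sink = 0;
        for (int i = 0; i < 100_000; i++) {
            sink += work(i);
        }

        // Only now start timing.
        long start = System.nanoTime();
        for (int i = 0; i < 10_000; i++) {
            sink += work(i);
        }
        long elapsedMicros = (System.nanoTime() - start) / 1_000;
        System.out.println("measured loop: " + elapsedMicros
            + " us (sink=" + sink + ")");
    }
}
```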
>>>
>>> On Thu, Feb 16, 2023 at 6:02 AM shobhit agarwal <[email protected]> 
>>> wrote:
>>>
>>>> Thanks for replying.
>>>> I am using the code from the article below: 
>>>>
>>>> https://www.techgeeknext.com/spring-boot/spring-boot-grpc-example#google_vignette
>>>>
>>>> It's a Spring Boot application.
>>>> Even for a single request, gRPC's response time is 100 ms while REST 
>>>> takes only 12 ms.
>>>>
>>> -- 
>>
> You received this message because you are subscribed to the Google Groups "
>> grpc.io" group.
>>
> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to [email protected].
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/grpc-io/1cc1f02b-722d-4b4e-babe-443da624632an%40googlegroups.com
>>  
>> <https://groups.google.com/d/msgid/grpc-io/1cc1f02b-722d-4b4e-babe-443da624632an%40googlegroups.com?utm_medium=email&utm_source=footer>
>> .
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/ab632b08-d672-42ea-bfb0-77e3b4e71887n%40googlegroups.com.
