Bumped grpc-netty from 1.5.0 to 1.6.1, which bumped netty from 4.1.12.Final 
to 4.1.14.Final. That fixed the problem for me.
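For reference, the whole fix is just the dependency bump; if you depend on 
grpc-netty directly, the Maven coordinates would look like this (Gradle users 
bump the same `io.grpc:grpc-netty` coordinate):

```xml
<!-- grpc-netty 1.6.1 transitively pulls in the fixed netty 4.1.14.Final -->
<dependency>
  <groupId>io.grpc</groupId>
  <artifactId>grpc-netty</artifactId>
  <version>1.6.1</version>
</dependency>
```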

For the curious, it looked like a bug in KeepAliveManager that, in certain 
situations, sent two pings at a time instead of one. Luckily, the netty team 
had already fixed it.
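One detail worth knowing from the logs below: when the client receives a 
GOAWAY with too_many_pings, grpc-java doubles its keepalive interval before 
retrying — that is the "Increased keepalive time nanos to 20,000,000,000" 
warning (10 s doubled to 20 s). A simplified sketch of that reaction, not the 
actual io.grpc.internal.AtomicBackoff code:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Simplified illustration of the client-side keepalive backoff:
// each time the server answers GOAWAY(too_many_pings), the client
// doubles the keepalive interval it will use from then on.
class KeepaliveBackoffSketch {
    private final AtomicLong keepAliveNanos;

    KeepaliveBackoffSketch(long initial, TimeUnit unit) {
        this.keepAliveNanos = new AtomicLong(unit.toNanos(initial));
    }

    // Called when the client sees GOAWAY with too_many_pings.
    long backoff() {
        return keepAliveNanos.updateAndGet(n -> n * 2);
    }

    long currentNanos() {
        return keepAliveNanos.get();
    }

    public static void main(String[] args) {
        KeepaliveBackoffSketch b = new KeepaliveBackoffSketch(10, TimeUnit.SECONDS);
        b.backoff();
        // 10 s keepalive doubled after one GOAWAY: prints 20000000000
        System.out.println(b.currentNanos());
    }
}
```

So even with the netty bug present, the client degrades gracefully rather 
than hammering the server forever.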

Keep up the good work!
Cristian

On Friday, September 22, 2017 at 8:55:28 AM UTC-7, [email protected] 
wrote:
>
> What does "increase the limit on the server side" mean? I configured a 
> client and server as:
>
> private val channel = NettyChannelBuilder
>     .forAddress("localhost", StrawmanServer.Port)
>     .usePlaintext(true)
>     .keepAliveTime(10, TimeUnit.SECONDS)
>     .keepAliveTimeout(5, TimeUnit.SECONDS)
>     .keepAliveWithoutCalls(false)
>     .build
>
> val server = NettyServerBuilder
>     .forPort(StrawmanServer.Port)
>     .executor(pool)
>     .permitKeepAliveTime(10, TimeUnit.SECONDS)
>     .permitKeepAliveWithoutCalls(true)
>     .build
>
> I get:
>
> Sep 22, 2017 8:52:32 AM io.grpc.netty.NettyClientHandler$1 onGoAwayReceived
> WARNING: Received GOAWAY with ENHANCE_YOUR_CALM. Debug data: {1}
> Sep 22, 2017 8:52:32 AM io.grpc.internal.AtomicBackoff$State backoff
> WARNING: Increased keepalive time nanos to 20,000,000,000
> RESOURCE_EXHAUSTED: Bandwidth exhausted
> HTTP/2 error code: ENHANCE_YOUR_CALM
> Received Goaway
> too_many_pings
>
> What is the proper way to configure a server to permit clients with an 
> aggressive keepalive (10 s)?
>
> Thank you,
> Cristian
>
> On Monday, June 5, 2017 at 4:17:30 PM UTC-7, Eric Anderson wrote:
>>
>> In 1.3 we started allowing clients to be more aggressive. From the 1.3 
>> release notes:
>>
>> "Keepalives in Netty and OkHttp now allow sending pings without 
>> outstanding RPCs. The minimum keepalive time was also reduced from 1 minute 
>> to 10 seconds. Clients must get permission from the services they use 
>> before enabling keepalive."
>>
>> However, that puts servers in danger, so we also added server-side 
>> detection of over-zealous clients. In the release notes:
>>
>> "Netty server: now detects overly aggressive keepalives from clients, 
>> with configurable limits. Defaults to permitting keepalives every 5 minutes 
>> only while there are outstanding RPCs, but clients must not depend on this 
>> value."
>>
>> too_many_pings is the server saying the client is pinging too frequently. 
>> Either reduce the keepalive rate on the client-side or increase the limit 
>> on the server-side.
>>
>> On Mon, Jun 5, 2017 at 2:51 PM, <[email protected]> wrote:
>>
>>>
>>> Hi
>>>
>>> I set up a gRPC stream with these NettyChannelBuilder options:
>>>     .keepAliveTime(60L, TimeUnit.SECONDS)
>>>     .keepAliveTimeout(8L, TimeUnit.SECONDS)
>>>
>>> At times the code sleeps for 15 minutes, and I can see the keepalive 
>>> pings in Wireshark.
>>>
>>> But after a time I see :
>>>
>>> -----------------------------------------------------------------------------------------------------------------------------------------
>>> Jun 05, 2017 5:29:53 PM io.grpc.netty.NettyClientHandler$1 
>>> onGoAwayReceived
>>> WARNING: Received GOAWAY with ENHANCE_YOUR_CALM. Debug data: {1}
>>> Jun 05, 2017 5:29:53 PM io.grpc.internal.AtomicBackoff$State backoff
>>> WARNING: Increased keepalive time nanos to 240,000,000,000
>>> 2017-06-05 21:29:53,073 ERROR OrdererClient:116 - Received error on 
>>> channel foo, orderer orderer.example.com, url grpc://localhost:7050, 
>>> RESOURCE_EXHAUSTED: Bandwidth exhausted
>>> HTTP/2 error code: ENHANCE_YOUR_CALM
>>> Received Goaway
>>> too_many_pings
>>> io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: Bandwidth exhausted
>>> HTTP/2 error code: ENHANCE_YOUR_CALM
>>> Received Goaway
>>> too_many_pings
>>>         at io.grpc.Status.asRuntimeException(Status.java:540)
>>>         at 
>>> io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:392)
>>>         at 
>>> io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:426)
>>>         at 
>>> io.grpc.internal.ClientCallImpl.access$100(ClientCallImpl.java:76)
>>>         at 
>>> io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:512)
>>>         at 
>>> io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$700(ClientCallImpl.java:429)
>>>         at 
>>> io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:544)
>>>         at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:52)
>>>         at 
>>> io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:117)
>>>         at 
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>>         at 
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>>         at java.lang.Thread.run(Thread.java:745)
>>>
>>> -----------------------------------------------------------------------------------------------------------------------------------------
>>>
>>> But everything seems to continue to work (grpc 1.3.0).
>>> Is there something I can do to stop that? Any ideas what's going on here?
>>> Thanks!
>>>
>>>
>>>
>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/7da28965-60d3-44d0-9a97-bde53c0c10ce%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
