Hi Noah- The entire output of the run looked like this:

21:46:52.966 - main() starting
21:46:52.969 -   Channel Up- Testing Stub
21:46:52.969 -     Making RPC
21:46:53.071 -     Leaving testStub()
21:46:53.073 -   Leaving test()
D0530 21:46:54.078000000 17504 iomgr.c:101] Waiting for 1 iomgr objects to be destroyed
D0530 21:46:55.083000000 17504 iomgr.c:101] Waiting for 1 iomgr objects to be destroyed
D0530 21:46:56.086000000 17504 iomgr.c:101] Waiting for 1 iomgr objects to be destroyed
D0530 21:46:57.091000000 17504 iomgr.c:101] Waiting for 1 iomgr objects to be destroyed
D0530 21:46:58.095000000 17504 iomgr.c:101] Waiting for 1 iomgr objects to be destroyed
D0530 21:46:59.099000000 17504 iomgr.c:101] Waiting for 1 iomgr objects to be destroyed
D0530 21:47:00.102000000 17504 iomgr.c:101] Waiting for 1 iomgr objects to be destroyed
D0530 21:47:01.104000000 17504 iomgr.c:101] Waiting for 1 iomgr objects to be destroyed
D0530 21:47:02.107000000 17504 iomgr.c:101] Waiting for 1 iomgr objects to be destroyed
D0530 21:47:03.110000000 17504 iomgr.c:120] Failed to free 1 iomgr objects before shutdown deadline: memory leaks are likely
D0530 21:47:03.111000000 17504 iomgr.c:82] LEAKED OBJECT: client:socket=0x000002699AA021C0 000002699AA02298
21:47:03.110 - main() Exiting
Press any key to continue . . .
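
For anyone reproducing this: the environment variables discussed in the thread can be set like so before launching the test binary (bash syntax; the binary name below is a placeholder, and GRPC_ABORT_ON_LEAKS is the variable Noah mentioned, which I haven't verified myself):

```shell
# Minimal gRPC debug logging; this is what produced the iomgr lines above.
export GRPC_VERBOSITY=DEBUG
# Extra tracing; in my runs this produced no additional output.
export GRPC_TRACE=all
# Per Noah's note: make grpc_shutdown crash when it detects a leak,
# instead of waiting out the ten-second shutdown deadline.
export GRPC_ABORT_ON_LEAKS=1
# ./grpc_stub_test    # placeholder for the actual test binary
```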



Thanks a lot for looking into this! If there's anything else I can provide, 
let me know.
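
Also, for reference: the channel in this run was created with plain grpc::CreateChannel, exactly as in the snippet earlier in the thread. A variant that passes channel arguments (e.g. the reconnect backoff key I mentioned) would look roughly like the sketch below, using grpc::ChannelArguments and grpc::CreateCustomChannel; this is just an illustration, not something that fixed the delay for me:

```cpp
#include <grpc++/grpc++.h>

// Sketch: create the channel with explicit channel arguments instead of
// grpc::CreateChannel. In my testing, GRPC_ARG_INITIAL_RECONNECT_BACKOFF_MS
// only seemed to affect the gap between the first and second connection
// attempts, not the shutdown delay.
std::shared_ptr<grpc::Channel> MakeChannelWithArgs()
{
    grpc::ChannelArguments args;
    args.SetInt(GRPC_ARG_INITIAL_RECONNECT_BACKOFF_MS, 100);  // milliseconds
    return grpc::CreateCustomChannel("localhost:54321",
                                     grpc::InsecureChannelCredentials(),
                                     args);
}
```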

Cheers,
--mike




On Tuesday, May 30, 2017 at 12:43:08 PM UTC-7, Noah Eisen wrote:
>
> So the reason that ten-second deadline exists is to allow garbage 
> collectors (for the wrapped languages) to do some needed cleanup. It does 
> not make much sense to shorten it or make it configurable; since it 
> happens at shutdown time, it is OK to take a little extra time.
>
> There is an environment variable, GRPC_ABORT_ON_LEAKS, which, if set, will 
> cause grpc_shutdown to crash when it sees a leak like this one. We might 
> add a new environment variable that ignores the leak and skips the wait 
> (GRPC_IGNORE_LEAKS or GRPC_YOLO or something).
>
> So all that is about the behavior when a leak happens. But even better 
> would be the leak not happening at all. Would you attach more of the log, 
> please? GRPC_VERBOSITY=DEBUG should be fine. We want to see exactly 
> what object is getting leaked. Also, any info about where you are running 
> this would be great, so we can reproduce it and trace it down.
>
> On Tue, May 30, 2017 at 8:45 AM, Mike Oberberger <[email protected]> wrote:
>
>> Hi Noah-
>>
>> The GRPC_VERBOSITY=DEBUG produced this:
>>
>> D0530 08:45:57.048000000 17740 iomgr.c:101] Waiting for 1 iomgr objects to be destroyed
>>
>> ten times, once per second. So it would seem that your hunch is correct, 
>> but I really don't know what to do now. GRPC_TRACE=all didn't produce 
>> any output, by the way.
>>
>> Thank you!
>>
>>
>> On Tuesday, May 30, 2017 at 7:43:08 AM UTC-7, Noah Eisen wrote:
>>>
>>> Could you turn on debugging and attach the output? I have a hunch that 
>>> this comes from grpc_iomgr_shutdown. It will wait ten seconds to try to 
>>> free all iomgr objects before giving up and leaking memory.
>>>
>>> export GRPC_VERBOSITY=DEBUG for minimal debugging. That should be 
>>> enough, but for even more tracing you can export GRPC_TRACE=all.
>>>
>>> On Mon, May 29, 2017 at 12:12 PM, Veldaeven <[email protected]> wrote:
>>>
>>>> The purpose of the code was initially to test the operation of the grpc 
>>>> stub when the grpc service was not available. However, the behavior I'm 
>>>> seeing indicates that there's something going on that I don't understand- 
>>>> hence the question.
>>>>
>>>> In this code:
>>>>
>>>>     #define IN_MILLISECONDS(x) (std::chrono::system_clock::now() + std::chrono::milliseconds(x))
>>>>
>>>>     string NowString()
>>>>     {
>>>>         char buf[128];
>>>>         SYSTEMTIME timeBuf;
>>>>         ::GetLocalTime(&timeBuf);
>>>>         sprintf(buf, "%02d:%02d:%02d.%03d - ", timeBuf.wHour, timeBuf.wMinute, timeBuf.wSecond, timeBuf.wMilliseconds);
>>>>         return string(buf);
>>>>     }
>>>>
>>>>     void testStub(std::shared_ptr<grpc::Channel> chan)
>>>>     {
>>>>         MessageProcessor::Stub client(chan);
>>>>
>>>>         Void _void;
>>>>         AccumulateAmount amount;
>>>>         amount.set_amount(42);
>>>>
>>>>         grpc::ClientContext ctx;
>>>>         ctx.set_deadline(IN_MILLISECONDS(100));
>>>>
>>>>         cout << NowString() << "    Making RPC\n";
>>>>         grpc::Status st = client.Accumulate(&ctx, amount, &_void);
>>>>         cout << NowString() << "    Leaving testStub()\n";
>>>>     }
>>>>
>>>>     void test()
>>>>     {
>>>>         auto chan = grpc::CreateChannel("localhost:54321", grpc::InsecureChannelCredentials());
>>>>
>>>>         cout << NowString() << "  Channel Up- Testing Stub\n";
>>>>         testStub(chan);
>>>>         cout << NowString() << "  Leaving test()\n";
>>>>     }
>>>>
>>>>     int main()
>>>>     {
>>>>         cout << NowString() << "Calling test()\n";
>>>>         test();
>>>>         cout << NowString() << "Exiting 'main'\n";
>>>>         return 1;
>>>>     }
>>>>
>>>>
>>>>
>>>> the output is
>>>>
>>>>     11:42:05.400 - Calling test()
>>>>     11:42:05.403 -   Channel Up- Testing Stub
>>>>     11:42:05.404 -     Making RPC
>>>>     11:42:05.506 -     Leaving testStub()
>>>>     11:42:05.507 -   Leaving test()
>>>>     11:42:15.545 - Exiting 'main'
>>>>     Press any key to continue . . .
>>>>
>>>>
>>>> It should be evident from the timestamps that the destructor for the 
>>>> Channel is taking just over 10 seconds.
>>>>
>>>> My question is this: *What can I do to significantly reduce the time it 
>>>> takes to destroy the grpc Channel?*
>>>>
>>>> My guess is that the channel is still trying to connect to the 
>>>> non-existent server/service, and that the channel's destructor isn't 
>>>> cancelling that connection request.
>>>>
>>>> Perhaps there's an argument/option to pass to the CreateChannel call? 
>>>> I've looked at http://www.grpc.io/grpc/cpp/group__grpc__arg__keys.html, 
>>>> but I didn't see anything that looked like it would affect the initial 
>>>> connection attempt (GRPC_ARG_INITIAL_RECONNECT_BACKOFF_MS 
>>>> <http://www.grpc.io/grpc/cpp/group__grpc__arg__keys.html#gaedb26c5ca7d3d279b81d12d9bf6dabc6> 
>>>> seemed to affect the re-connection between the first and second attempts).
>>>>
>>>>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/01ab0533-7ccd-4760-93af-43f982c2111f%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
