Valgrind reports a memory leak in my web crawler:

==36126== 923,440 bytes in 485 blocks are possibly lost in loss record 56 of 56
==36126==    at 0x483DD99: calloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==36126==    by 0x4896414: ??? (in /usr/lib/x86_64-linux-gnu/libcurl.so.4.6.0)
==36126==    by 0x48AAA2E: ??? (in /usr/lib/x86_64-linux-gnu/libcurl.so.4.6.0)
==36126==    by 0x48ABB1B: ??? (in /usr/lib/x86_64-linux-gnu/libcurl.so.4.6.0)
==36126==    by 0x48ABCB7: curl_multi_socket_action (in /usr/lib/x86_64-linux-gnu/libcurl.so.4.6.0)
==36126==    by 0x10C667: event_cb (crawler.c:188)
==36126==    by 0x10D78F: crawler_init (crawler.c:555)
==36126==    by 0x49E0608: start_thread (pthread_create.c:477)
==36126==    by 0x4B41102: clone (clone.S:95)

Here is my event_cb function:

static void
event_cb(GlobalInfo *g, int fd, int revents)
{
        CURLMcode rc;
        struct itimerspec its;

        /* map the epoll events to libcurl's select flags */
        int action = ((revents & EPOLLIN) ? CURL_CSELECT_IN : 0) |
                     ((revents & EPOLLOUT) ? CURL_CSELECT_OUT : 0);

        rc = curl_multi_socket_action(g->multi, fd, action, &g->still_running);
        mcode_or_die("event_cb: curl_multi_socket_action", rc);

        check_multi_info(g);

        if (g->still_running <= 0) {
                /* last transfer done, disarm the timeout timerfd */
                //fprintf(MSG_OUT, "last transfer done, kill timeout\n");
                memset(&its, 0, sizeof(struct itimerspec));
                timerfd_settime(g->tfd, 0, &its, NULL);
        }
}

What is this memory being allocated for, and what do I need to free? Is this
a bug in curl, or am I doing something wrong on my end?
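
My reading of the libcurl docs is that each easy handle has to be detached
and cleaned up when its transfer finishes, and the multi handle freed at
shutdown, roughly like the sketch below (not my actual code; "easy" stands
in for whatever easy handle each finished transfer carries):

        /* per finished transfer */
        curl_multi_remove_handle(g->multi, easy); /* detach from the multi handle */
        curl_easy_cleanup(easy);                  /* free the easy handle itself */

        /* once the crawler shuts down for good */
        curl_multi_cleanup(g->multi);             /* free the multi handle */
        curl_global_cleanup();                    /* release libcurl's global state */

Is that the part I am missing, or are these blocks supposed to be released
by the multi interface itself?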

Thanks
James Read