On 23/11/2020 20:16, James Read via curl-library wrote:
> I have attempted to make two minimal codes that
> demonstrate my problem.
>
> The first can be downloaded from
> https://github.com/JamesRead5737/fast
> It basically recursively downloads http://www.google.com,
> http://www.yahoo.com and http://www.bing.com.
> I am able to achieve download speeds of up to 7Gbps with
> this simple program.
>
> The second can be downloaded from
> https://github.com/JamesRead5737/slow
> The program extends the first program with an asynchronous
> DNS component and instead of recursively downloading the
> same URLs over and over again downloads from a list of
> URLs provided in the http001 file. Full instructions are
> in the README. What's troubling me is that this second
> version of the program only achieves an average download
> speed of 16Mbps.
>
> I have no idea why this is happening. Shouldn't the second
> program run just as fast as the first?
>
> Any ideas what I'm doing wrong?

That's a lot of code you're asking us to debug.

Have you profiled it? Have you tried narrowing down the
problem to a smaller testcase? I find it hard to believe
that these are minimal.

Also, there is no recursion here.

Cheers

-------------------------------------------------------------------
Unsubscribe: https://cool.haxx.se/list/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html
