Hi.

I have been testing (again) MHD and Nginx serving static content and noticed
that MHD is around two times slower than Nginx, but I'm not sure whether the
slowness comes from MHD itself or from something I'm doing wrong.

*Test built with MHD*

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <microhttpd.h>

#define PAGE \
  "<!DOCTYPE html>\n\
<html lang=\"en\">\n\
  <head>\n\
    <meta charset=\"UTF-8\" />\n\
    <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\" />\n\
    <title>Hello world benchmark</title>\n\
  </head>\n\
  <body>\n\
    This is a static content to check the performance of the following HTTP\n\
    servers:\n\
    <ul>\n\
      <li>MHD</li>\n\
      <li>nginx</li>\n\
    </ul>\n\
  </body>\n\
</html>"

static enum MHD_Result ahc_echo(void *cls, struct MHD_Connection *con,
                                const char *url, const char *method,
                                const char *version, const char *upload_data,
                                size_t *upload_data_size, void **ptr) {
  struct MHD_Response *res;
  enum MHD_Result ret;
  if ((void *)1 != *ptr) {
    /* First call for this request: only headers are available yet, so defer
       the response to the next callback invocation. */
    *ptr = (void *)1;
    return MHD_YES;
  }
  *ptr = NULL;
  res = MHD_create_response_from_buffer(strlen(PAGE), PAGE,
                                        MHD_RESPMEM_PERSISTENT);
  ret = MHD_queue_response(con, MHD_HTTP_OK, res);
  MHD_destroy_response(res);
  return ret;
}

int main() {
  struct MHD_Daemon *d;
  d = MHD_start_daemon(
      MHD_USE_EPOLL_INTERNAL_THREAD | MHD_SUPPRESS_DATE_NO_CLOCK |
          MHD_USE_EPOLL_TURBO,
      8080, NULL, NULL, &ahc_echo, NULL, MHD_OPTION_CONNECTION_TIMEOUT,
      (unsigned int)120, MHD_OPTION_THREAD_POOL_SIZE,
      (unsigned int)sysconf(_SC_NPROCESSORS_ONLN), MHD_OPTION_CONNECTION_LIMIT,
      (unsigned int)10000, MHD_OPTION_END);
  if (NULL == d)
    return 1;
  getchar();
  MHD_stop_daemon(d);
  return 0;
}
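
For reference, the test can be built and run with something like the following
(the file name mhd_bench.c is arbitrary, and libmicrohttpd is assumed to be
discoverable via pkg-config):

$ gcc -O2 -o mhd_bench mhd_bench.c $(pkg-config --cflags --libs libmicrohttpd)
$ ./mhd_bench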

*Nginx stuff*

Version: 1.18.0

Config:

$ cat /etc/nginx/nginx.conf

worker_processes auto;
worker_cpu_affinity auto;
events {
    worker_connections  10000;
}
http {
    access_log off;
    keepalive_timeout 65;
    server {
        listen 8080 default_server;
...

$ cat /usr/share/nginx/html/index.html

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Hello world benchmark</title>
  </head>
  <body>
    This is a static content to check the performance of the following HTTP
    servers:
    <ul>
      <li>MHD</li>
      <li>nginx</li>
    </ul>
  </body>
</html>

*Environment*

$ lscpu

Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   43 bits physical, 48 bits virtual
CPU(s):                          8
On-line CPU(s) list:             0-7
Thread(s) per core:              2
Core(s) per socket:              4
Socket(s):                       1
NUMA node(s):                    1
Vendor ID:                       AuthenticAMD
CPU family:                      23
Model:                           24
Model name:                      AMD Ryzen 7 3700U with Radeon Vega Mobile Gfx
Stepping:                        1
Frequency boost:                 enabled
CPU MHz:                         2100.611
CPU max MHz:                     2300.0000
CPU min MHz:                     1400.0000
BogoMIPS:                        4591.70
Virtualization:                  AMD-V
L1d cache:                       128 KiB
L1i cache:                       256 KiB
L2 cache:                        2 MiB
L3 cache:                        4 MiB
NUMA node0 CPU(s):               0-7

$ cat /proc/version

Linux version 5.6.14-300.fc32.x86_64 (mockbu...@bkernel03.phx2.fedoraproject.org)
(gcc version 10.1.1 20200507 (Red Hat 10.1.1-1) (GCC)) #1 SMP Wed May 20 20:47:32 UTC 2020

*Finally, the tests using wrk!*
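
Both servers were hit with the same wrk parameters (as shown in the output
below: 10 threads, 1000 connections, 10 seconds, latency distribution
enabled), i.e. something like:

$ wrk -t10 -c1000 -d10s --latency http://127.0.0.1:8080/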

*wrk* results (averaged over three runs, with an interval between them) for Nginx:

Running 10s test @ http://127.0.0.1:8080/
  10 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     7.71ms    3.61ms  72.10ms   74.14%
    Req/Sec    12.37k     1.46k   26.09k    82.34%
  Latency Distribution
     50%    6.96ms
     75%    9.29ms
     90%   12.65ms
     99%   18.26ms
  1228942 requests in 10.09s, 717.21MB read
Requests/sec: 121831.57
Transfer/sec:     71.10MB

*wrk* results (averaged the same way) for MHD:

Running 10s test @ http://127.0.0.1:8080/
  10 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    12.18ms    7.47ms  59.49ms   71.56%
    Req/Sec     7.50k     1.54k   16.48k    68.81%
  Latency Distribution
     50%   10.72ms
     75%   16.23ms
     90%   22.37ms
     99%   35.05ms
  745561 requests in 10.09s, 310.72MB read
Requests/sec:  73908.50
Transfer/sec:     30.80MB

These tests were run against localhost just for illustration, but I also ran
the same tests remotely against an external server (with some limits due to
internet bandwidth), serving a larger content (around 150 kB), and MHD was
slower there as well.

I don't know if there is any specific configuration that would speed MHD up
here. If there is, I would appreciate a hint and will retest to get better
results.

TIA for any help! 👍

--
Silvio Clécio
