Hi all,

We are running 6.2.0 with a cherry-pick of TS-4497 to make ts.fetch work; without that patch, ts.fetch does not function at all.
The performance results came back very poor, and I am not sure whether the issue is that keep-alive is disabled for internal requests, or that ts.fetch is blocking on itself. In both tests (ts.fetch and luasocket) the backend is a service on 127.0.0.1, so it is not really a remote request.

Without ts.fetch, I get about 45,156 req/s:

    [root@Di-Dev wrk]# ./wrk -c 50 -t 20 -d 10 -s ./scripts/via_proxy_get1.lua http://10.12.17.57:8080
    Running 10s test @ http://10.12.17.57:8080
      20 threads and 50 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency     1.16ms    3.55ms   87.34ms   99.06%
        Req/Sec     2.27k   596.33     3.49k    67.84%
      456054 requests in 10.10s, 374.47MB read
    Requests/sec:  45156.39
    Transfer/sec:     37.08MB

With ts.fetch, I get roughly 80 req/s. All I did was add the following line to a hook callback function:

    local res = ts.fetch('http://127.0.0.1/ping', {method = 'GET'})

and I do not process any data from res; the call is made and nothing else.

    [root@Di-Dev wrk]# ./wrk -c 50 -t 20 -d 10 -s ./scripts/via_proxy_get1.lua http://10.12.17.57:8080
    Running 10s test @ http://10.12.17.57:8080
      20 threads and 50 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   547.01ms   98.71ms    1.12s    84.97%
        Req/Sec     4.46      3.06     20.00     77.60%
      712 requests in 10.02s, 598.66KB read
    Requests/sec:     71.07
    Transfer/sec:     59.76KB

With luasocket, I get roughly 19,000 req/s, which is still a huge drop in performance; that is why we tried ts.fetch in the first place, but it came back with far worse results.

Thanks,
Di Li
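For reference, the ts.fetch reproduction amounts to roughly the following ts_lua script. This is a minimal sketch, not the exact script we ran: the ts.fetch call and URL are as reported above, but the surrounding do_remap wrapper is an assumption about how the hook was wired up, and the function name may differ in your configuration.

```lua
-- Minimal ts_lua sketch of the reproduction (assumes the ts_lua plugin
-- is loaded via remap.config; do_remap is the conventional remap entry
-- point and is an illustrative choice of hook).
function do_remap()
    -- Issue an internal fetch to a local ping endpoint and ignore the
    -- response entirely; this single call drops throughput from ~45k
    -- req/s to ~80 req/s in our tests.
    local res = ts.fetch('http://127.0.0.1/ping', {method = 'GET'})
    return 0
end
```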