Hi Kit,

Thanks for the suggestion. I don’t think we will reach that point at all: in
the forward proxy, the header check will only run on the “CONNECT”
method; after that, the TLS handshake and the encrypted traffic are opaque to
the forward proxy.

It's a sort of connection management rather than request management, and each of
the connections should have keep-alive enabled as well.
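
For illustration, a CONNECT-only check might look roughly like this in a ts_lua global script. This is a sketch under assumptions: the hook name follows the ts_lua global-plugin convention, and the `Proxy-Authorization` header is just an example policy, not something from this thread.

```lua
-- Sketch: a header check that only applies to the CONNECT method in a
-- forward proxy. Once the tunnel is up, the proxy only sees encrypted
-- bytes, so this is the last point where request headers are visible.
function do_global_read_request()
    if ts.client_request.get_method() == 'CONNECT' then
        -- Example policy: require proxy credentials on the tunnel request.
        local auth = ts.client_request.header['Proxy-Authorization']
        if auth == nil then
            ts.http.set_resp(407, 'Proxy Authentication Required')
        end
    end
    return 0
end
```

Everything after a successful CONNECT is a byte tunnel, so any per-request logic has to happen here.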


Thanks,
Di Li




> On Dec 22, 2016, at 10:31 AM, Shu Kit Chan <chanshu...@gmail.com> wrote:
> 
> It is quite hard to judge, since I think it is an apples-to-oranges comparison,
> but if you are accessing a local redis, then luasocket should be OK.
> 
> Thanks.
> 
> On Thu, Dec 22, 2016 at 10:18 AM, Di Li <di...@apple.com> wrote:
>> Sincere apologies for this.
>> 
>> The bottleneck was webdis, which is an HTTP front end for redis: ts.fetch
>> only supports HTTP requests, and redis itself doesn’t speak the HTTP
>> protocol.
>> 
>> I changed the fetch URL to something else, which brought it back to 12000 req/s.
>> And I did check for internal requests; otherwise nothing comes back.
>> 
>> So far, I still get better results with luasocket over a unix domain socket
>> (19K req/s) than with ts.fetch over HTTP calls (12K req/s).
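
(For reference, the luasocket path over a unix domain socket looks roughly like this. A sketch only: it uses the `socket.unix` module shipped with the luasocket distribution, and the socket path `/var/run/webdis.sock` is made up for illustration.)

```lua
-- Sketch: a blocking HTTP-style request over a unix domain socket
-- using luasocket's socket.unix module.
local unix = require('socket.unix')

local c = assert(unix())
assert(c:connect('/var/run/webdis.sock'))  -- hypothetical socket path
c:send('GET /ping HTTP/1.1\r\nHost: localhost\r\nConnection: keep-alive\r\n\r\n')
local status = c:receive('*l')  -- first response line, e.g. the status line
c:close()
```

Note that every call here blocks the event-loop thread it runs on, which is the trade-off against ts.fetch discussed below.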
>> 
>> Based on my understanding, ts.fetch is non-blocking and luasocket is
>> blocking; should I still go with ts.fetch even though it gives worse results?
>> 
>> 
>> Thanks,
>> Di Li
>> 
>> 
>> 
>> 
>>> On Dec 22, 2016, at 10:00 AM, Shu Kit Chan <chanshu...@gmail.com> wrote:
>>> 
>>> One thing I can think of is that you may be calling ts.fetch()
>>> unconditionally. We need to check whether the request is an internal one
>>> first. If it is, it was likely generated by ts.fetch() itself, and
>>> therefore we should not issue another ts.fetch() for it.
>>> 
>>> Otherwise it will result in a recursive situation and severely affect
>>> performance.
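
That guard might be sketched like this in ts_lua. An assumption-laden sketch: it relies on `ts.http.is_internal_request()` being available in this build, the hook name follows the ts_lua global convention, and the `/ping` URL is taken from the example below in this thread.

```lua
-- Sketch: only call ts.fetch() for requests that did not originate from
-- a plugin; otherwise each fetch re-enters the script and recurses.
function do_global_read_request()
    if ts.http.is_internal_request() ~= 0 then
        return 0  -- internal (plugin-originated) request: skip the fetch
    end
    local res = ts.fetch('http://127.0.0.1/ping', {method = 'GET'})
    return 0
end
```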
>>> 
>>> But if you have taken care of that already, please share your script with
>>> me so I can take a closer look.
>>> 
>>> Thanks.
>>> 
>>> On Thu, Dec 22, 2016 at 9:48 AM, Shu Kit Chan <chanshu...@gmail.com> wrote:
>>>> Can you share your lua script in full with me?
>>>> 
>>>> Thanks.
>>>> 
>>>> Kit
>>>> 
>>>> On Thu, Dec 22, 2016 at 1:30 AM, Di Li <di...@apple.com> wrote:
>>>>> Hey Guys,
>>>>> 
>>>>> I'm running 6.2.0 with a cherry-pick of TS-4497 to make ts.fetch work;
>>>>> otherwise it won’t even function.
>>>>> 
>>>>> The performance came back with very poor results; I'm not sure whether the
>>>>> issue is that keep-alive is disabled for internal requests or that
>>>>> ts.fetch locks on itself.
>>>>> 
>>>>> I'm running ts.fetch or luasocket against a service on 127.0.0.1, so
>>>>> it's not really a remote request.
>>>>> 
>>>>> 
>>>>> Without ts.fetch, I get 45156 req/s:
>>>>> 
>>>>> [root@Di-Dev wrk]# ./wrk -c 50 -t 20 -d 10 -s 
>>>>> ./scripts/via_proxy_get1.lua http://10.12.17.57:8080
>>>>> Running 10s test @ http://10.12.17.57:8080
>>>>> 20 threads and 50 connections
>>>>> Thread Stats   Avg      Stdev     Max   +/- Stdev
>>>>>   Latency     1.16ms    3.55ms  87.34ms   99.06%
>>>>>   Req/Sec     2.27k   596.33     3.49k    67.84%
>>>>> 456054 requests in 10.10s, 374.47MB read
>>>>> Requests/sec:  45156.39
>>>>> Transfer/sec:     37.08MB
>>>>> 
>>>>> 
>>>>> With ts.fetch, I get roughly 80 req/s, simply by adding the following
>>>>> into any hook callback function:
>>>>> 
>>>>> local res = ts.fetch('http://127.0.0.1/ping', {method = 'GET'}), and I
>>>>> don't process any data from res; I purely just call it.
>>>>> 
>>>>> 
>>>>> 
>>>>> [root@Di-Dev wrk]# ./wrk -c 50 -t 20 -d 10 -s 
>>>>> ./scripts/via_proxy_get1.lua http://10.12.17.57:8080
>>>>> Running 10s test @ http://10.12.17.57:8080
>>>>> 20 threads and 50 connections
>>>>> Thread Stats   Avg      Stdev     Max   +/- Stdev
>>>>>   Latency   547.01ms   98.71ms   1.12s    84.97%
>>>>>   Req/Sec     4.46      3.06    20.00     77.60%
>>>>> 712 requests in 10.02s, 598.66KB read
>>>>> Requests/sec:     71.07
>>>>> Transfer/sec:     59.76KB
>>>>> 
>>>>> 
>>>>> 
>>>>> With luasocket, I get roughly 19000 req/s, which is still a huge drop in
>>>>> performance; that's the reason we tried ts.fetch, but it came back with
>>>>> much worse results.
>>>>> 
>>>>> 
>>>>> 
>>>>> Thanks,
>>>>> Di Li
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>> 
