Hello, Konstantin!

On Sunday, 19 March 2017 at 14:19:36 UTC+3, Konstantin Shaposhnikov wrote:
>
> Hi,
>
> External measurements probably show a more accurate picture.
>

Of course!
 

>
> First of all, internal latency numbers only include time spent doing the 
> actual work but don't include HTTP parsing (by net/http) and network overhead.
>
 
Yep, I absolutely agree with you, but I don't use net/http, I'm using 
fasthttp (jfyi). I don't believe that HTTP parsing can take more than a few 
microseconds, and the network overhead on the local machine is negligibly 
small!
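
Just to sanity-check that, here is a rough benchmark sketch. It uses 
net/http's http.ReadRequest instead of fasthttp (fasthttp should only be 
faster) and a made-up request, so treat it only as an order-of-magnitude 
estimate:

package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
	"testing"
)

// Rough order-of-magnitude check of HTTP request parsing cost.
func main() {
	raw := "GET /track?id=12345 HTTP/1.1\r\n" +
		"Host: localhost\r\n" +
		"User-Agent: bench\r\n" +
		"Accept: */*\r\n\r\n"

	res := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			if _, err := http.ReadRequest(bufio.NewReader(strings.NewReader(raw))); err != nil {
				b.Fatal(err)
			}
		}
	})
	fmt.Println(res) // ns/op is typically in the single-digit microsecond range
}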
 

> Secondly, latency measured internally always looks better because it 
> doesn't include application stalls that happen outside of the measured 
> code.
>

Agree!
 

> Imagine that it takes 10ms for net/http to parse the request (e.g. due to an 
> STW pause) and 1ms to run the handler. The real request latency is 11ms in 
> this case, but if measured internally it is only 1ms. This is known as 
> coordinated omission.
>

As I said earlier, I don't believe that HTTP parsing and other "run-time" 
stuff can take 10ms; that would be unacceptable! For example, suppose this 
situation did take place: why then don't I see similar spikes in both 
graphs (nginx latency, myapp latency), just with a different order of 
magnitude? Here is part of my nginx log sample:

# cat access.log-20170318 | grep "17/Mar/2017:03:42:17" | awk '{ print $15,$16 }' | sort | uniq -c
   2056 0.000 0.000
    200 0.001 0.000
   1313 0.001 0.001
      3 0.002 0.001
      9 0.002 0.002
      5 0.003 0.003
      3 0.004 0.004
      4 0.005 0.005
      5 0.006 0.006
      4 0.007 0.007
      2 0.008 0.007
      5 0.008 0.008
      1 0.009 0.009


As you can see, your hypothesis doesn't hold here: more than 99 percent of 
requests are really fast and complete in less than 1 millisecond! And I'm 
trying to find out what happens with the remaining 1 percent!
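
To start digging into that 1 percent, one thing I plan to try (just a 
minimal sketch, assuming that logging GC pause end times and lining them up 
against the nginx access log timestamps is enough to confirm or rule out GC) 
is periodically dumping recent GC pauses via runtime/debug:

package main

import (
	"log"
	"runtime/debug"
	"time"
)

// Periodically log recent GC pauses together with their end timestamps,
// so they can be matched against latency spikes in the nginx access log.
func logGCPauses(interval time.Duration) {
	var seen int64
	for range time.Tick(interval) {
		var s debug.GCStats
		debug.ReadGCStats(&s) // Pause/PauseEnd hold the most recent pauses first
		newPauses := s.NumGC - seen
		for i := int64(0); i < newPauses && i < int64(len(s.Pause)); i++ {
			log.Printf("GC pause %v ended at %s",
				s.Pause[i], s.PauseEnd[i].Format("2006-01-02T15:04:05.000"))
		}
		seen = s.NumGC
	}
}

func main() {
	go logGCPauses(5 * time.Second)
	select {} // the real fasthttp server would run here instead
}

Alternatively, running the binary with GODEBUG=gctrace=1 prints a line per 
GC cycle to stderr, including the STW phase durations, which gives roughly 
the same information without code changes.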
 

> I recommend to watch this video for lots of useful information about 
> latency measurement: https://www.youtube.com/watch?v=lJ8ydIuPFeU
>

I've started watching this video, thanks. One thing that I want to share: I 
agree that measuring the latency only inside my handler function is not 
right. So the main question is: how can I measure the latency in the other 
parts of my application? That is the main question of this topic!
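
One approach that seems to cover "the other parts" is to capture a short 
execution trace with runtime/trace and inspect it with go tool trace, since 
that shows GC, scheduler delays, and syscall blocking that handler-level 
timers never see. A minimal sketch (the admin port and the 
/debug/capture-trace path are made up for illustration):

package main

import (
	"log"
	"net/http"
	"os"
	"runtime/trace"
	"time"
)

// Capture a 5-second execution trace on demand; inspect the resulting file
// with `go tool trace trace.out` to see GC, scheduling and syscall stalls.
func captureTrace(w http.ResponseWriter, r *http.Request) {
	f, err := os.Create("trace.out")
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	defer f.Close()

	if err := trace.Start(f); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	time.Sleep(5 * time.Second)
	trace.Stop()

	w.Write([]byte("trace written to trace.out\n"))
}

func main() {
	// Hypothetical admin listener next to the main fasthttp application.
	http.HandleFunc("/debug/capture-trace", captureTrace)
	log.Fatal(http.ListenAndServe("localhost:6061", nil))
}

If the app already exposes net/http/pprof on a debug listener, the built-in 
/debug/pprof/trace?seconds=5 endpoint does roughly the same thing.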
 

>
>
> Konstantin
>
> On Saturday, 18 March 2017 19:52:21 UTC, Alexander Petrovsky wrote:
>>
>> Hello!
>>
>> Colleagues, I need your help!
>>
>> So, I have an application that accepts dynamic JSON over HTTP (fasthttp), 
>> unmarshals it into a map[string]interface{} using ffjson, then reads some 
>> fields into a struct, makes some calculations using this struct, writes the 
>> struct fields back into a map[string]interface{}, writes this map to Kafka 
>> (asynchronously), and finally replies to the client over HTTP. Also, I have 
>> 2 caches, one containing 100 million items and the second 20 million; these 
>> caches are built using freecache to avoid slooooow GC pauses. The incoming 
>> rate is 4k rps per server (5 servers in total), and total CPU utilisation is 
>> about 15% per server.
>>
>> The problem: my latency measurements show that latency inside the 
>> application is significantly lower than outside.
>> 1. How do I measure latency?
>>     - I've added timings into the HTTP handler functions, and from those I 
>> make graphs.
>> 2. How did I conclude that latency inside the application is significantly 
>> lower than outside?
>>     - I installed nginx in front of my application and log $request_time 
>> and $upstream_response_time, and from those I make graphs too.
>>
>> These graphs show me that latency inside the application is about 500 
>> microseconds at the 99th percentile, and about 10-15 milliseconds outside 
>> (nginx). Nginx and my app work on the same server. My graphs show me that 
>> GC occurs every 30-40 seconds and takes less than 3 milliseconds.
>>
>>
>> <https://lh3.googleusercontent.com/-HOZJ9iwMyyw/WM2POBUU1MI/AAAAAAAABV8/jhIV1f_PBxwPbs7fSmbqg5WJfKhB-CONgCLcB/s1600/1.png>
>>
>>
>> <https://lh3.googleusercontent.com/-Z-3-RgNcpN0/WM2PSCKXebI/AAAAAAAABWA/u-QhZs2YfzwzP6DHzu_7cT2toU-px-azACLcB/s1600/2.png>
>>
>>
>> Could someone help me find the problem and profile my application?
>>
>
