Hi all,

My problem was solved by the records.config settings below:

##############################################################################
# Buffer size
##############################################################################
CONFIG proxy.config.net.sock_recv_buffer_size_in INT 10485760
CONFIG proxy.config.net.sock_recv_buffer_size_out INT 10485760
CONFIG proxy.config.net.sock_send_buffer_size_in INT 10485760
CONFIG proxy.config.net.sock_send_buffer_size_out INT 10485760
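
One caveat worth noting: on Linux the kernel caps whatever setsockopt()
asks for, so the SO_RCVBUF/SO_SNDBUF requests above are silently limited by
net.core.rmem_max / net.core.wmem_max. If those are still at their
defaults, a sysctl change along these lines is needed as well (a sketch;
the file path is hypothetical and the 10 MB values simply mirror the ATS
settings above):

# /etc/sysctl.d/99-ats-buffers.conf
net.core.rmem_max = 10485760
net.core.wmem_max = 10485760

Apply it with "sysctl -p /etc/sysctl.d/99-ats-buffers.conf" and restart
traffic_server so its sockets are created with the new sizes.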

Best Regards,
Ergin

On Thu, Oct 23, 2014 at 6:33 PM, Ergin Ozekes <ergin.oze...@gmail.com>
wrote:

> Hi all,
>
> According to my investigation of ATS read performance,
> NetHandler::mainNetEvent is the top ATS function in the profile, and the
> system shows a large number of buffer overruns. The network input
> interface carries 250 Mbps of traffic, but the output toward the clients
> is only 125 Mbps; only half of the downloaded content is delivered back
> to the client.
>
> Does anyone know the reason for this?
> And how can I fix it?
>
> This is the "perf top" output:
>   4.82%  libpcre.so.3.13.1        [.] match
>   3.27%  [kernel]                 [k] native_write_msr_safe
>   2.09%  [kernel]                 [k] ipt_do_table
>   1.60%  [kernel]                 [k] __schedule
>   1.57%  [kernel]                 [k] menu_select
>   1.43%  libc-2.13.so             [.] 0x00000000000e92b0
>   1.35%  [kernel]                 [k] find_busiest_group
>   1.16%  traffic_server           [.] EThread::execute()
>   1.12%  [kernel]                 [k] copy_user_generic_string
>   1.10%  [kernel]                 [k] nf_iterate
>   1.04%  [kernel]                 [k] _raw_spin_lock_irqsave
>   1.03%  [kernel]                 [k] int_sqrt
>   1.02%  traffic_server           [.] NetHandler::mainNetEvent(int, Event*)
>   0.96%  [bnx2x]                  [k] bnx2x_rx_int
>   0.93%  [kernel]                 [k] _raw_spin_lock
>   0.90%  [kernel]                 [k] htable_selective_cleanup
>   0.86%  [kernel]                 [k] cpuidle_enter_state
>   0.83%  [kernel]                 [k] enqueue_task_fair
>   0.83%  [kernel]                 [k] tcp_packet
>   0.76%  [kernel]                 [k] apic_timer_interrupt
>   0.76%  [kernel]                 [k] timerqueue_add
>   0.76%  [kernel]                 [k] idle_cpu
>   0.71%  [bnx2x]                  [k] bnx2x_start_xmit
>   0.67%  [kernel]                 [k] rb_erase
>   0.64%  traffic_server           [.] read_from_net(NetHandler*, UnixNetVConnection*, EThread*)
>
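> To see where NetHandler::mainNetEvent actually spends its time, I can
> record a call-graph profile with something like the following (assuming
> perf is available and traffic_server is the process name):
>
>   perf record -g -p $(pidof traffic_server) -- sleep 30
>   perf report
>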
> On Thu, Oct 23, 2014 at 12:33 PM, Ergin Ozekes <ergin.oze...@gmail.com>
> wrote:
>
>> Hi all,
>>
>> My current network bandwidth is 350 Mbps.
>> "netstat -st" keeps printing the line below, with the counter steadily
>> increasing:
>>
>>     1555903887 packets collapsed in receive queue due to low socket buffer
>>     1555908175 packets collapsed in receive queue due to low socket buffer
>>     1555912925 packets collapsed in receive queue due to low socket buffer
>>     1555920054 packets collapsed in receive queue due to low socket buffer
>>     1555929162 packets collapsed in receive queue due to low socket buffer
>>     1555938162 packets collapsed in receive queue due to low socket buffer
>>     1555945682 packets collapsed in receive queue due to low socket buffer
>>     1555951783 packets collapsed in receive queue due to low socket buffer
>>     1555959318 packets collapsed in receive queue due to low socket buffer
>>     1555962474 packets collapsed in receive queue due to low socket buffer
>>     1555969574 packets collapsed in receive queue due to low socket buffer
>>
>> I've increased the socket buffer size and the backlog queue size, and
>> set the memlock limit to unlimited for the ats user. How can I fix this
>> completely?
>> Also, how can I increase the socket (connection) read performance of ATS?
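>>
>> For reference, this is how I watch whether the counter keeps growing and
>> how much of each socket's receive buffer is in use (the port filter is
>> just an example):
>>
>>   watch -d 'netstat -st | grep -i collapse'
>>   ss -tmi '( sport = :8080 )'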
>>
>>
>> Best Regards,
>> Ergin Ozekes
>>
