Hi Daniel,
Here is the output of kamctl stats:
BEFORE:
core:bad_URIs_rcvd = 0
core:bad_msg_hdr = 0
core:drop_replies = 0
core:drop_requests = 0
core:err_replies = 0
core:err_requests = 0
core:fwd_replies = 0
core:fwd_requests = 0
core:rcv_replies = 3
core:rcv_requests = 5
core:unsupported_me
Hello,
If the fragments are fewer, then it might be other structures populated by other
modules. Can you share the full output of 'kamctl stats' before and
after the tests?
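
For comparing the two dumps, here is a rough sketch (a hypothetical
helper script, assuming the plain 'name = value' lines that kamctl
prints) that reports only the counters that changed:

import sys

def load(path):
    # Parse "group:name = value" lines from a saved kamctl stats dump.
    stats = {}
    with open(path) as f:
        for line in f:
            name, sep, value = line.partition("=")
            value = value.strip()
            if sep and value.isdigit():
                stats[name.strip()] = int(value)
    return stats

before = load(sys.argv[1])  # e.g. stats_before.txt
after = load(sys.argv[2])   # e.g. stats_after.txt

for name in sorted(before.keys() & after.keys()):
    delta = after[name] - before[name]
    if delta:
        print(f"{name}: {before[name]} -> {after[name]} ({delta:+d})")

Save each dump with 'kamctl stats > stats_before.txt' (and likewise
after the test), then run the script with the two file names.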
Cheers,
Daniel
On 26/01/2017 19:41, Andy wrote:
> Thanks Daniel for the quick response,
> here is the before/after; I see real_used_size increased by 880k
Thanks Daniel for the quick response,
here is the before/after; I see real_used_size increased by 880k
before:
shmem:fragments = 26346
shmem:free_size = 8572547448
shmem:max_used_size = 19229728
shmem:real_used_size = 17387144
shmem:total_size = 8589934592
shmem:used_size = 11844368
after:
shmem:fragm
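
As a side note on the numbers above: real_used_size also counts the
allocator's own overhead (chunk headers, alignment) on top of
used_size, so the difference between the two is a rough measure of
memory held by bookkeeping and fragments. From the 'before' snapshot:

# Plain arithmetic on the before values posted above.
real_used_size = 17387144
used_size = 11844368
print(real_used_size - used_size)  # 5542776 bytes, roughly 5.3 MB of overhead

A small growth in real_used_size after a test can therefore come from
fragmentation rather than a leak in one module.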
Hello,
there can be some overhead taken away by memory fragments.
Can you give the output of:
kamctl stats shmem
before and after the test?
Cheers,
Daniel
On 26/01/2017 05:37, Andy wrote:
> Hi,
>
> the tls total memory before and after a tls connection is cleaned up does
> not match,
>
> during the test there were 10 tls connections created
Hi,
the tls total memory before and after a tls connection is cleaned up does
not match,
during the test there were 10 tls connections created, for which I see
tls_h_close and tls_h_tcpconn_clean were each called exactly 10 times
for the same connection ids,
but before and after the tls shm memory does not match.
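
For reference, a minimal way to reproduce the pattern described above
(host and port are assumptions for a lab setup, and certificate checks
are disabled since the proxy may use a self-signed certificate) is to
open and cleanly close ten TLS connections:

import socket
import ssl

HOST, PORT = "127.0.0.1", 5061  # assumed Kamailio TLS listener

# Skip certificate validation for a self-signed lab certificate.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

for i in range(10):
    with socket.create_connection((HOST, PORT)) as raw:
        with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
            pass  # handshake done; closing triggers server-side cleanup

Take a 'kamctl stats shmem' snapshot before the loop and another one a
bit later, after the tcp connection timers have run, so the closed
connections have actually been released.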