On 10/01/2013 12:54 PM, Peter Dunkley wrote:
Done.
The patch is in master and the 4.0 branch.
Thanks,
Peter
Thanks for applying the patch. It looks like websockets in master are
stable now. No memory leaks have been detected for the last two days.
pkg.stats still shows some strange numbers, but th
On 1 October 2013 08:22, Vitaliy Aleksandrov wrote:
Hello,
Thank you for the explanation.
Could somebody review the patch in the attachment? I tried to fix the
problem with a growing tcpconn->refcnt for websocket connections.
On 30 September 2013 17:14, Vitaliy Aleksandrov wrote:
>
> Could you please share why nathelper aggregates both WS and WSS transports
> to "ws" and then msg_translator has to detect the type of a connection to
> a destination to build a correct Via?
>
> modules/nathelper/nathelper.c create_rcv_uri
I found one place where tcpconn_put() is never called after tcpconn_get():
--- a/msg_translator.c
+++ b/msg_translator.c
@@ -2509,9 +2509,11 @@ char* via_builder( unsigned int *len,
} else if (con->rcv.proto==PROTO_WSS) {
memcpy(line_buf+MY_VIA_LEN-4, "WSS ", 4);
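To make the pattern behind the fix easier to see, here is a minimal, self-contained
sketch of the get/put pairing. This is only a toy model, not the attached patch: the
real functions are tcpconn_get()/tcpconn_put() from Kamailio's tcp layer, and their
signatures differ from the toy ones below.

/* Toy model of the rule the patch restores: every "get" that takes a
 * reference must be matched by a "put" that drops it, otherwise refcnt
 * keeps growing and the connection structure is never freed. */
#include <stdio.h>

enum proto { PROTO_WS, PROTO_WSS };

struct toy_conn {
    enum proto proto;
    int refcnt;
};

static struct toy_conn *toy_get(struct toy_conn *c) { c->refcnt++; return c; }
static void toy_put(struct toy_conn *c) { c->refcnt--; }

/* Simplified stand-in for the Via-building code: it looks the connection
 * up only to learn the transport, so it must also drop the reference it
 * took once the "WS"/"WSS" decision has been made. */
static void build_via(struct toy_conn *registry, char *via, size_t len)
{
    struct toy_conn *con = toy_get(registry);   /* refcnt++ */
    if (con) {
        snprintf(via, len, "SIP/2.0/%s host;branch=z9hG4bK1",
                 con->proto == PROTO_WSS ? "WSS" : "WS");
        toy_put(con);   /* refcnt--: the call that was missing */
    }
}

int main(void)
{
    struct toy_conn c = { PROTO_WSS, 0 };
    char via[128];
    build_via(&c, via, sizeof(via));
    printf("%s (refcnt back to %d)\n", via, c.refcnt);
    return 0;
}

Without the toy_put() the refcnt would end up one higher after every built Via,
which is the same effect as the growing tcpconn->refcnt described above.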
Yes, I found your commit. That's why I'm now using the latest master.
tcp_list shows 200+ tcp connections and only a few of them have a
ref_count bigger than 1. netstat shows the same number of established
connections.
If lost tcp_conn structures are not shown in tcp_list, how can I check if
it is my
Hi,
I fixed some memory leaks in master on 4th July.
The main leak I was investigating was in the tcp connection structures used
by the websocket module. When a connection is used, the ref count is
increased, and it should be decreased when each packet/transaction etc. has
completed. Each connectio
First of all, thanks for trying to help.
It's my fault that I mixed "top" into this story; I just wanted to show
that, while my system is working just fine:
1. the "used" and "real_used" fields of a process (tcp receiver) are bigger
than what I set with -M
2. "free" hasn't changed since the last restart.
root@p
Hello,
the output of top is not relevant, because kamailio uses an internal
memory manager. If system memory is increasing, then it is likely to be
from an external library.
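As a rough illustration of why top does not tell you much here (a toy program, not
Kamailio code, and it deliberately touches the whole pool at startup to make the
point): the pools are allocated up front, so what top shows tracks how much of those
pools the process has touched, not how much the internal manager currently has in use.

/* Toy illustration: a preallocated pool that is fully touched at startup.
 * Whether the "allocator" below hands out 1 KB or 30 MB of it, the RES
 * column in top stays roughly the same, so top cannot show how much of
 * the pool is really in use. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define POOL_SIZE (32 * 1024 * 1024)    /* e.g. comparable to -M 32 */

static char *pool;
static size_t pool_used;

static void *pool_alloc(size_t n)       /* trivial bump allocator */
{
    if (pool_used + n > POOL_SIZE) return NULL;
    void *p = pool + pool_used;
    pool_used += n;
    return p;
}

int main(void)
{
    pool = malloc(POOL_SIZE);
    if (!pool) return 1;
    memset(pool, 0, POOL_SIZE);     /* touch every page: RES ~32 MB from now on */

    pool_alloc(1024);               /* internal usage: 1 KB ... */
    printf("internally used: %zu bytes, check RES in top\n", pool_used);
    sleep(30);

    pool_alloc(30 * 1024 * 1024);   /* ... or 30 MB: RES barely changes */
    printf("internally used: %zu bytes, check RES again\n", pool_used);
    sleep(30);

    free(pool);
    return 0;
}

That is why pkg.stats (the manager's own counters) is the thing to look at for PKG
memory, while a growing system-memory footprint usually points at something outside
the manager, e.g. an external library.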
I saw there was work on the websocket module; not being the developer, I
don't know if it is something to be backported. Maybe Peter
I switched to the latest master branch and it seems to work better, but
unfortunately I can't tell how much PKG memory kamailio really uses, so I
don't know whether it still has problems with PKG.
For instance "kamcmd pkg.stats" always shows that the tcp_main process has
free: 32627984 (started with -M 32),
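(Side note on the numbers: -M takes megabytes, so -M 32 gives each process a
32 * 1024 * 1024 = 33554432 byte private pool; free: 32627984 therefore means only
about 0.9 MB of tcp_main's pool is accounted as used.)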
Additional info:
- kamailio-4.0.2 started with -m 128 -M 16
- Process:: ID=36 PID=2599 Type=tcp main process
- this server works as a websocket (ws, wss) to udp/tcp gateway.
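(For anyone following along: -m sets the size of the single shared-memory (SHM) pool
used by all processes, while -M sets the size of the private (PKG) pool each process
gets, so -m 128 -M 16 means one 128 MB SHM pool plus a 16 MB PKG pool per process.)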
PKG:
Also, top output showed that the RES column of the tcp workers was
constantly growing.
SHM:
After some time kamailio stopped t
Hello,
can you get the type of the process with 'kamctl ps'?
Cheers,
Daniel
Hello,
I have one installation with a strange pkg.stats output:
root@host:~# kamcmd pkg.stats index 36
{
    entry: 36
    pid: 2599
    rank: -4
    used: 16234288
    free: 15788240
    real_used: 16900864
}
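(Doing the arithmetic on this output: with -M 16 the pool should be
16 * 1024 * 1024 = 16777216 bytes, yet used: 16234288 is already close to that and
real_used: 16900864 exceeds it, while used + free adds up to 32022528 bytes, roughly
twice the configured pool. Assuming this is the same PID 2599 / -M 16 run described
earlier in the thread, that mismatch is exactly the strangeness being reported.)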
After some time I've checked it again and found that used and
Didn't check master branch before writing previous email. There were
some commits about memory leaks in websocket module. Will try master.