Hi all,
   My environment is: Linux, OpenSSL 0.9.8e, pthreads (NPTL, glibc 2.3.5).

     I have encountered a "memory leak or delayed memory deallocation" problem in 
my multi-threaded OpenSSL application; I'm not sure which one it is. Any help 
would be much appreciated. Thank you!

     My multi-threaded OpenSSL application is a server that accepts connections 
over SSL or plain sockets. To narrow down the problem, I implemented the 
application in two variants: with a thread pool, and with threads that exit 
after serving a connection. In each round, I trigger a test program that 
establishes 1000 connections to the app. I observed different behavior: if the 
app handles connections via SSL, its RSS grows in every round (though it 
eventually levels off at about 60M). If the connections come in over plain 
sockets, the RSS does not grow (or grows only slightly) after the second round. 
So I think the extra memory occupation is introduced by OpenSSL.

     I have already applied the three measures that have been discussed for 
threaded OpenSSL programs:
     1. Add ERR_remove_state(0) after SSL_free() so OpenSSL cleans up its 
thread-local error state.
     2. Call SSL_CTX_set_session_cache_mode(ctx, 
SSL_SESS_CACHE_SERVER | SSL_SESS_CACHE_NO_INTERNAL); to disable the internal 
session cache.
     3. Set the mutex callbacks described in the OpenSSL threads(3) man page.


     Valgrind reports only small pieces of "still reachable" memory in my app, 
so I think the application code itself is OK.


    Pthread stack storage is reused, so if we assume the maximum stack storage 
is allocated during the first test round, the memory growth in the second round 
must be caused by something else; SSL is the suspect. I wrote my code with 
reference to openssl/apps/s_server.c (I use the SSL_xxx API).
    However, I don't think this is a memory leak, because Valgrind does not 
report such blocks as leaked. It seems ERR_remove_state(0) is not enough for a 
multi-threaded OpenSSL application. Could anyone help me out? Or tell me whether 
this memory occupation belongs to NPTL or OpenSSL, and how to free such blocks 
of memory right after a connection is closed?

Thank you!



PS.

     Output of cat /proc/app_pid/maps; it may help (currently, the RSS of the app is 34132K):
08048000-08072000 r-xp 00000000 07:02 38         /home/dd/app
08072000-08074000 rwxp 0002a000 07:02 38         /ssl/dd/app
08074000-09edf000 rwxp 08074000 00:00 0          [heap] // 1E6B000 = 31148K of the 34132K RSS; most of the memory occupation is in the heap
36100000-36121000 rwxp 36100000 00:00 0           // 21000
36121000-36200000 ---p 36121000 00:00 0             // DF000
36300000-363fe000 rwxp 36300000 00:00 0            // FE000
363fe000-36400000 ---p 363fe000 00:00 0                //  2000
36400000-364fc000 rwxp 36400000 00:00 0             // FC000
364fc000-36500000 ---p 364fc000 00:00 0                //  4000
36500000-365f9000 rwxp 36500000 00:00 0            // F9000
365f9000-36600000 ---p 365f9000 00:00 0                //  7000
36600000-366fe000 rwxp 36600000 00:00 0            // FE000
366fe000-36700000 ---p 366fe000 00:00 0                //  2000
36700000-367fb000 rwxp 36700000 00:00 0            // FB000   ~= 1M
367fb000-36800000 ---p 367fb000 00:00 0                //  5000    = 20K
368fc000-368fd000 ---p 368fc000 00:00 0                  // maybe the signal thread's 4K stack guard page
368fd000-370fc000 rwxp 368fd000 00:00 0               // maybe the first thread in the thread pool
370fc000-370fd000 ---p 370fc000 00:00 0                  // 4K guard page; the following pairs are the 256 threads in the thread pool
370fd000-378fc000 rwxp 370fd000 00:00 0 
378fc000-378fd000 ---p 378fc000 00:00 0 
378fd000-380fc000 rwxp 378fd000 00:00 0 
380fc000-380fd000 ---p 380fc000 00:00 0 
380fd000-388fc000 rwxp 380fd000 00:00 0 
388fc000-388fd000 ---p 388fc000 00:00 0 
388fd000-390fc000 rwxp 388fd000 00:00 0 
390fc000-390fd000 ---p 390fc000 00:00 0 
390fd000-398fc000 rwxp 390fd000 00:00 0 
398fc000-398fd000 ---p 398fc000 00:00 0 
// too many, skipped




 

