Hi, everyone

I have run into a problem, and I do not know whether it is a bug.

Description of problem:

The version of krb5 I used is krb5-1.13.2-12.el7.

I set up a Kerberos server with more than 20 principals in the database.

For certain reasons, I run kadmin -q "listprincs" every second.
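A loop along these lines reproduces the request pattern (a sketch, not my exact script; root/admin and $ADMIN_PW are placeholders for your admin principal and password):

```shell
#!/bin/sh
# Hypothetical reproduction loop: issue "listprincs" once per second.
# root/admin and $ADMIN_PW are placeholders, not taken from my setup.
COUNT="${COUNT:-3}"   # raise this (or loop forever) to see the growth
i=0
while [ "$i" -lt "$COUNT" ]; do
    kadmin -p root/admin -w "$ADMIN_PW" -q "listprincs" >/dev/null 2>&1
    i=$((i + 1))
    sleep 1
done
echo "issued $i listprincs requests"
```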

This produces a large number of entries in kadmin.log and krb5kdc.log.

kadmin.log
Jul 04 19:19:58 omu1.cloudedge.com kadmind[11234](info): closing down fd 31
Jul 04 19:20:04 omu1.cloudedge.com kadmind[11234](Notice): Request: kadm5_init, 
root/ad...@cloudedge.com, success, client=root/ad...@cloudedge.com, 
service=kadmin/omu1.cloudedge....@cloudedge.com, addr=172.16.0.15, vers=4, 
flavor=6
Jul 04 19:20:04 omu1.cloudedge.com kadmind[11234](Notice): Request: 
kadm5_get_principals, *, success, client=root/ad...@cloudedge.com, 
service=kadmin/omu1.cloudedge....@cloudedge.com, addr=172.16.0.15

krb5kdc.log
Jul 09 11:40:08 omu1.cloudedge.com krb5kdc[16397](info): AS_REQ (5 etypes {16 
23 18 3 1}) 172.16.0.1: ISSUE: authtime 1468064408, etypes {rep=16 tkt=23 
ses=16}, root/ad...@cloudedge.com for kadmin/omu1.cloudedge....@cloudedge.com
Jul 09 11:40:13 omu1.cloudedge.com krb5kdc[16397](info): AS_REQ (5 etypes {16 
23 18 3 1}) 172.16.0.1: ISSUE: authtime 1468064413, etypes {rep=16 tkt=23 
ses=16}, ftpvrpv...@cloudedge.com for krbtgt/cloudedge....@cloudedge.com
Jul 09 11:40:13 omu1.cloudedge.com krb5kdc[16397](info): AS_REQ (5 etypes {16 
23 18 3 1}) 172.16.0.1: ISSUE: authtime 1468064413, etypes {rep=16 tkt=23 
ses=16}, ftpvrpv...@cloudedge.com for krbtgt/cloudedge....@cloudedge.com
Jul 09 11:40:15 omu1.cloudedge.com krb5kdc[16397](info): AS_REQ (5 etypes {16 
23 18 3 1}) 172.16.0.1: ISSUE: authtime 1468064415, etypes {rep=16 tkt=23 
ses=16}, root/ad...@cloudedge.com for kadmin/omu1.cloudedge....@cloudedge.com

When I increase the frequency of command execution, the log files grow to 
several hundred megabytes or even gigabytes, and the resident memory of 
kadmind also grows to several hundred megabytes or even gigabytes.
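The memory growth can be watched from outside with something like the following (a generic sketch: it samples the daemon's resident set size via ps, and falls back to the current shell's own PID only so the snippet runs anywhere):

```shell
#!/bin/sh
# Sample the resident set size (RSS) of kadmind over time.
# Falls back to this shell's own PID if kadmind is not running here.
PID="$(pgrep -o kadmind 2>/dev/null || echo $$)"
for n in 1 2 3; do
    RSS_KB="$(ps -o rss= -p "$PID" | tr -d ' ')"
    echo "sample $n: pid=$PID rss=${RSS_KB} kB"
    sleep 1
done
```

Under the load described above, the rss figure for kadmind keeps climbing instead of leveling off.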

I also used valgrind to look for the leak. Here are the suspicious stacks.
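The report below comes from running kadmind in the foreground under valgrind, roughly as follows (a sketch; it assumes kadmind at /usr/sbin/kadmind with the -nofork flag and valgrind >= 3.9 for --show-leak-kinds; the guard and timeout only make the snippet safe to paste, so drop the timeout for a real debugging run):

```shell
#!/bin/sh
# Run kadmind in the foreground under valgrind to collect a leak report.
# Assumptions: kadmind at /usr/sbin/kadmind, -nofork supported,
# valgrind >= 3.9 (for --show-leak-kinds).
if command -v valgrind >/dev/null 2>&1 && [ -x /usr/sbin/kadmind ]; then
    timeout 30 valgrind --leak-check=full --show-leak-kinds=definite \
        --log-file=/tmp/kadmind-valgrind.log \
        /usr/sbin/kadmind -nofork
    status=ran
else
    echo "valgrind and/or kadmind not present on this host"
    status=skipped
fi
```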

==3326== 1,832,251 (48 direct, 1,832,203 indirect) bytes in 3 blocks are 
definitely lost in loss record 276 of 279
==3326==    at 0x4C2B974: calloc (in 
/usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==3326==    by 0x527F4DE: gssrpc__svcauth_gss (svc_auth_gss.c:414)
==3326==    by 0x527EB69: svc_do_xprt (svc.c:487)
==3326==    by 0x527EB69: gssrpc_svc_getreqset (svc.c:430)
==3326==    by 0x116EF0: process_rpc_connection (net-server.c:1899)
==3326==    by 0x684ECD7: verto_fire (in /usr/lib64/libverto.so.1.0.0)
==3326==    by 0xE41946A: ??? (in /usr/lib64/libtevent.so.0.9.25)
==3326==    by 0xE417936: ??? (in /usr/lib64/libtevent.so.0.9.25)
==3326==    by 0xE4140FC: _tevent_loop_once (in /usr/lib64/libtevent.so.0.9.25)
==3326==    by 0x684E4AE: verto_run (in /usr/lib64/libverto.so.1.0.0)
==3326==    by 0x10D8D6: main (ovsec_kadmd.c:534)
==3326==
==3326== 10,090,760 (196,896 direct, 9,893,864 indirect) bytes in 6,153 blocks 
are definitely lost in loss record 279 of 279
==3326==    at 0x4C2B974: calloc (in 
/usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==3326==    by 0x549EF0F: gss_acquire_cred_from (g_acquire_cred.c:170)
==3326==    by 0x549F1E3: gss_acquire_cred (g_acquire_cred.c:107)
==3326==    by 0x527F6AE: svcauth_gss_acquire_cred (svc_auth_gss.c:148)
==3326==    by 0x527F6AE: gssrpc__svcauth_gss (svc_auth_gss.c:497)
==3326==    by 0x527EB69: svc_do_xprt (svc.c:487)
==3326==    by 0x527EB69: gssrpc_svc_getreqset (svc.c:430)
==3326==    by 0x116EF0: process_rpc_connection (net-server.c:1899)
==3326==    by 0x684ECD7: verto_fire (in /usr/lib64/libverto.so.1.0.0)
==3326==    by 0xE41946A: ??? (in /usr/lib64/libtevent.so.0.9.25)
==3326==    by 0xE417936: ??? (in /usr/lib64/libtevent.so.0.9.25)
==3326==    by 0xE4140FC: _tevent_loop_once (in /usr/lib64/libtevent.so.0.9.25)
==3326==    by 0x684E4AE: verto_run (in /usr/lib64/libverto.so.1.0.0)
==3326==    by 0x10D8D6: main (ovsec_kadmd.c:534)
==3326==
==3326== LEAK SUMMARY:
==3326==    definitely lost: 196,972 bytes in 6,157 blocks
==3326==    indirectly lost: 11,726,067 bytes in 237,707 blocks
==3326==      possibly lost: 4,904 bytes in 13 blocks
==3326==    still reachable: 254,923 bytes in 5,728 blocks
==3326==         suppressed: 0 bytes in 0 blocks

I also fetched the latest krb5 source code from github.com, built it, and ran 
make install. The problem is still there.

I do not understand why many invocations of kadmin -q "listprincs" make the 
memory grow. How large can it get, what factors affect it, and is this normal 
behavior or a memory leak?

This issue has tortured me for a long time; any help would be greatly appreciated.

Thanks.

________________________________________________
Kerberos mailing list           Kerberos@mit.edu
https://mailman.mit.edu/mailman/listinfo/kerberos
