On Wed, Jun 2, 2010 at 6:52 PM, Lennart Sorensen <
[email protected]> wrote:

> On Wed, Jun 02, 2010 at 06:43:28PM +0200, Jonatan Soto wrote:
> > Well, that's what I did, and that's why I decided to ask here: I
> > thought there might be a problem, since the memory usage shown in top
> > doesn't match the sum of the memory consumed by each process.
> >
> > The following corresponds to Server3; I understand that these
> > processes cannot be consuming 2GB of physical memory...
> >
> >   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
> > 12027 root      20   0 89732 3764 2988 R    0  0.1   0:00.30 sshd
> > 22502 root      20   0  163m 2752 1560 S    0  0.1   0:42.68 nscd
> >  3608 root      20   0 77784 2164 1668 S    0  0.1   0:00.66 login
> >  3009 root      20   0  119m 2080 1004 S    0  0.1   0:20.07 rsyslogd
> > 28878 root      20   0 10576 1768 1312 S    0  0.1   0:00.36 bash
> > 12030 root      20   0 10528 1676 1264 S    0  0.1   0:00.00 bash
> >  3280 root      20   0 23896 1436 1132 S    0  0.0   7:36.22 vmware-guestd
> > 28889 root      20   0 42808 1144  672 S    0  0.0   0:01.12 sshd
> >  1262 root      16  -4 16912 1124  484 S    0  0.0   0:00.04 udevd
> > 12034 root      20   0 10624 1100  848 R    0  0.0   0:04.90 top
> >  3553 Debian-e  20   0 42716 1016  612 S    0  0.0   0:07.40 exim4
> >  3591 root      20   0 19804  844  652 S    0  0.0   0:05.16 cron
> >  2793 statd     20   0 10136  760  632 S    0  0.0   0:00.00 rpc.statd
> >     1 root      20   0 10312  756  620 S    0  0.0   0:11.76 init
> >  3020 root      20   0  3796  600  476 S    0  0.0   0:00.00 acpid
> >  3616 root      20   0  3796  584  484 S    0  0.0   0:00.00 getty
> >  3612 root      20   0  3796  580  484 S    0  0.0   0:00.00 getty
> >  3617 root      20   0  3796  580  484 S    0  0.0   0:00.00 getty
> >  3609 root      20   0  3796  576  484 S    0  0.0   0:00.00 getty
> >  3613 root      20   0  3796  576  484 S    0  0.0   0:00.00 getty
> >  2782 daemon    20   0  8020  536  416 S    0  0.0   0:00.00 portmap
> >  3571 daemon    20   0 16356  444  296 S    0  0.0   0:02.40 atd
>
> Certainly nothing there appears to be using much ram.
>
> > So I understand there's nothing to worry about. I should look at the
> > RES field, so that the sum across all processes gives the actual
> > physical memory usage. Is that right?
>
> That's my understanding at least.
>
> > Another way to see how much memory a process is consuming could be
> > 'ps aux', but there it is shown as a percentage (%MEM). That value
> > probably corresponds to the RES value in top. Please correct me if
> > I'm wrong.
>
> Well, if your machine is in fact not using the memory for cache or
> buffers and none of the processes have it, then I would say one of your
> drivers or some other kernel component is leaking memory.  That would
> be bad.  I wonder if that is the case.
>
> What is the output of 'free'?
>

Sorry, I should have posted it before. This is also from Server3, but
servers 1 and 2 have the same problem. Server4 is not a valid example
because I rebooted it a couple of days ago and it seems to be behaving
fine for now.

             total       used       free     shared    buffers     cached
Mem:       3097764    2263292     834472          0      49272     107712
-/+ buffers/cache:    2106308     991456
Swap:      2928632          0    2928632

In this case, out of 3GB, only about 100MB is cache and 50MB is buffers.
Together with the ~800MB free, that still leaves the system holding
roughly 2GB for I don't know what...
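As a cross-check (a sketch, assuming a Linux box with procps installed),
one can sum the resident set size of every process and compare the total
against the 2106308 kB that free reports as "-/+ buffers/cache" used:

```shell
# Sum the RSS (resident set size, in kB) of all processes.
# Caveats: shared pages are counted once per process, so this
# overstates real usage, and kernel-side memory (slab, page
# tables) does not appear here at all.
ps -e -o rss= | awk '{sum += $1} END {printf "total RSS: %d kB\n", sum}'
```

If that total comes out far below free's used-minus-buffers/cache
figure, the missing memory is being held by the kernel rather than by
any process.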


>
> In my case I get:
>              total       used       free     shared    buffers     cached
> Mem:      16473836   16364008     109828          0    4943552    8269128
> -/+ buffers/cache:    3151328   13322508
> Swap:     16777208        168   16777040
>
> So out of 16GB, 8GB is cache, 5GB buffers, and 3GB actually used.
> No problem there.  In your case, if that memory really is all being
> used by neither processes nor cache, then something seems wrong, and I
> can only think there is a kernel bug leaking memory somewhere.
>
> --
> Len Sorensen
>
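To follow up on the kernel-leak suspicion (a quick check of my own, not
something suggested in the thread): /proc/meminfo breaks out kernel slab
allocations, which free simply lumps into "used". On reasonably recent
kernels:

```shell
# Slab is memory used by kernel object caches; if SUnreclaim is
# large or keeps growing, a kernel driver or subsystem is likely
# holding the missing memory.
grep -E '^(Slab|SReclaimable|SUnreclaim)' /proc/meminfo
```

slabtop(1) then gives a per-cache breakdown if these numbers look
suspicious.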

Thank you for your time!
