On 3/11/2013 7:52 AM, Jan Kara wrote:
> > Yep, that's because it isn't implemented.
> Why do you think so? AFAICS it is implemented by setting the VM_RAND_READ
> flag in the VMA; do_async_mmap_readahead() and do_sync_mmap_readahead()
> check for the flag and don't do anything if it is set...
Oh, don't
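For reference, a minimal userspace sketch of the mechanism Jan describes: calling madvise(MADV_RANDOM) on a file-backed mapping sets VM_RAND_READ on the vma, and the fault-time readahead paths skip readahead when the flag is present, so only the faulted page is brought in. The file path and error handling below are illustrative, not taken from the thread.

```c
/* Minimal sketch: ask the kernel to skip readahead on a mapping.
 * The path below is a placeholder, not from the thread. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/path/to/data.db", O_RDONLY);   /* placeholder path */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); return 1; }

    /* MADV_RANDOM sets VM_RAND_READ on the vma; the fault path's
     * do_sync_mmap_readahead()/do_async_mmap_readahead() see the flag
     * and skip readahead, so only the faulted page is read in. */
    if (madvise(map, st.st_size, MADV_RANDOM) < 0)
        perror("madvise(MADV_RANDOM)");

    /* ... random lookups into map ... */

    munmap(map, st.st_size);
    close(fd);
    return 0;
}
```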
Jan Kara wrote:
On Fri 08-03-13 12:04:46, Howard Chu wrote:
The test clearly is accessing only 30GB of data. Once slapd reaches
this process size, the test can be stopped and restarted any number
of times, run for any number of hours continuously, and memory use
on the system is unchanged, and n
On Fri 08-03-13 12:04:46, Howard Chu wrote:
> Johannes Weiner wrote:
> >On Fri, Mar 08, 2013 at 07:00:55AM -0800, Howard Chu wrote:
> >>Chris Friesen wrote:
> >>>On 03/08/2013 03:40 AM, Howard Chu wrote:
> >>>
> There is no way that a process that is accessing only 30GB of a mmap
> should be able to fill up 32GB of RAM.
On Fri 08-03-13 20:22:19, Phillip Susi wrote:
> On 03/08/2013 10:00 AM, Howard Chu wrote:
> > Yes, that's what I was thinking. I added a
> > posix_madvise(..POSIX_MADV_RANDOM) but that had no effect on the
> > test.
>
> Yep, that's because it isn't implemented.
Hi Johannes,
On 03/09/2013 12:16 AM, Johannes Weiner wrote:
On Fri, Mar 08, 2013 at 07:00:55AM -0800, Howard Chu wrote:
Chris Friesen wrote:
On 03/08/2013 03:40 AM, Howard Chu wrote:
There is no way that a process that is accessing only 30GB of a mmap
should be able to fill up 32GB of RAM. There's nothing else running on
the machine, I've killed or suspended everything else in userland
besides a couple shells running top and vmstat.
Hi Johannes,
On 03/08/2013 10:08 AM, Johannes Weiner wrote:
On Thu, Mar 07, 2013 at 04:43:12PM +0100, Jan Kara wrote:
Added mm list to CC.
On Tue 05-03-13 09:57:34, Howard Chu wrote:
I'm testing our memory-mapped database code on a small VM. The
machine has 32GB of RAM and the size of the DB on disk is ~44GB.
On 03/08/2013 10:00 AM, Howard Chu wrote:
> Yes, that's what I was thinking. I added a
> posix_madvise(..POSIX_MADV_RANDOM) but that had no effect on the
> test.
Yep, that's because it isn't implemented.
You might try MADV_WILLNEED to schedule it to
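A small sketch of the MADV_WILLNEED suggestion: it kicks off readahead for an explicit range and does not wait for the data to arrive. The helper name and the idea of prefetching a known-hot range are illustrative assumptions, not code from slapd.

```c
/* Sketch: prefetch a range of an existing file-backed mapping that is
 * expected to be touched soon. Offsets and lengths are made up. */
#include <stdio.h>
#include <stddef.h>
#include <sys/mman.h>

/* 'map' is the base address of an existing mmap'd region. */
static void prefetch_range(void *map, size_t offset, size_t len)
{
    /* MADV_WILLNEED starts readahead for the given pages; the call
     * returns without waiting for the reads to complete. */
    if (madvise((char *)map + offset, len, MADV_WILLNEED) < 0)
        perror("madvise(MADV_WILLNEED)");
}
```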
Johannes Weiner wrote:
On Fri, Mar 08, 2013 at 07:00:55AM -0800, Howard Chu wrote:
Chris Friesen wrote:
On 03/08/2013 03:40 AM, Howard Chu wrote:
There is no way that a process that is accessing only 30GB of a mmap
should be able to fill up 32GB of RAM. There's nothing else running on
the machine, I've killed or suspended everything else in userland
besides a couple shells running top and vmstat.
On Fri, Mar 08, 2013 at 07:00:55AM -0800, Howard Chu wrote:
> Chris Friesen wrote:
> >On 03/08/2013 03:40 AM, Howard Chu wrote:
> >
> >>There is no way that a process that is accessing only 30GB of a mmap
> >>should be able to fill up 32GB of RAM. There's nothing else running on
> >>the machine, I've killed or suspended everything else in userland
> >>besides a couple shells running top and vmstat.
On 03/08/2013 09:00 AM, Howard Chu wrote:
First obvious conclusion - kswapd is being too aggressive. When free
memory hits the low watermark, the reclaim shrinks slapd down from 25GB
to 18-19GB, while the page cache still contains ~7GB of unmapped pages.
Ideally I'd like a tuning knob so I can s
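The "low watermark" here is the per-zone threshold at which kswapd wakes up and reclaims until the high watermark is reached. The min watermark is scaled by the existing vm.min_free_kbytes sysctl, with low and high derived from it, though that knob changes when reclaim starts rather than what it picks. A read-only sketch that dumps the watermarks being referred to, assuming the usual /proc/zoneinfo layout:

```c
/* Read-only sketch: print the per-zone min/low/high watermarks that
 * kswapd acts on. These scale with the vm.min_free_kbytes sysctl. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/zoneinfo", "r");
    if (!f) { perror("fopen"); return 1; }

    char line[256];
    while (fgets(line, sizeof(line), f)) {
        /* Keep the zone headers and the three watermark lines. */
        if (strncmp(line, "Node", 4) == 0 ||
            strstr(line, "min ") || strstr(line, "low ") ||
            strstr(line, "high "))
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}
```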
Chris Friesen wrote:
On 03/08/2013 03:40 AM, Howard Chu wrote:
There is no way that a process that is accessing only 30GB of a mmap
should be able to fill up 32GB of RAM. There's nothing else running on
the machine, I've killed or suspended everything else in userland
besides a couple shells running top and vmstat.
On 03/08/2013 03:40 AM, Howard Chu wrote:
There is no way that a process that is accessing only 30GB of a mmap
should be able to fill up 32GB of RAM. There's nothing else running on
the machine, I've killed or suspended everything else in userland
besides a couple shells running top and vmstat.
Kirill A. Shutemov wrote:
On Thu, Mar 07, 2013 at 11:46:39PM -0800, Howard Chu wrote:
You're misreading the information then. slapd is doing no caching of
its own, its RSS and SHR memory size are both the same. All it is
using is the mmap, nothing else. The RSS == SHR == FS cache, up to
16GB. RS
On Thu, Mar 07, 2013 at 11:46:39PM -0800, Howard Chu wrote:
> You're misreading the information then. slapd is doing no caching of
> its own, its RSS and SHR memory size are both the same. All it is
> using is the mmap, nothing else. The RSS == SHR == FS cache, up to
> 16GB. RSS is always == SHR, b
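A way to see the RSS == SHR relationship Howard describes from inside a process: /proc/self/statm reports total resident pages and file-backed ("shared") resident pages, which are what top shows as RES and SHR. A minimal sketch, not slapd code:

```c
/* Sketch: compare total resident pages with file-backed resident pages
 * for the current process (top's RES and SHR columns). */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    unsigned long size, resident, shared;
    FILE *f = fopen("/proc/self/statm", "r");
    if (!f) { perror("fopen"); return 1; }
    if (fscanf(f, "%lu %lu %lu", &size, &resident, &shared) != 3) {
        fclose(f);
        return 1;
    }
    fclose(f);

    long page_kb = sysconf(_SC_PAGESIZE) / 1024;
    /* For a process whose only large allocation is a shared file
     * mapping, resident and shared track each other closely. */
    printf("RSS: %lu kB  file-backed RSS: %lu kB\n",
           resident * page_kb, shared * page_kb);
    return 0;
}
```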
Johannes Weiner wrote:
On Thu, Mar 07, 2013 at 04:43:12PM +0100, Jan Kara wrote:
2 questions:
why is there data in the FS cache that isn't owned by (the mmap
of) the process that caused it to be paged in in the first place?
The filesystem cache is shared among processes because the filesy
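One way to observe that residency belongs to the page cache rather than to any one process is mincore(): for a MAP_SHARED file mapping it reports which pages are present in memory, whether or not the calling process has actually faulted them into its page tables. A sketch, with invented names:

```c
/* Sketch: count how many pages of a file-backed mapping are currently
 * present in the page cache, using mincore(). Names are illustrative. */
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

static size_t resident_pages(void *map, size_t len)
{
    long page = sysconf(_SC_PAGESIZE);
    size_t npages = (len + page - 1) / page;
    unsigned char *vec = malloc(npages);
    size_t i, count = 0;

    if (!vec || mincore(map, len, vec) < 0) {
        free(vec);
        return 0;
    }
    /* Bit 0 of each byte is set if the page is resident; for a
     * MAP_SHARED file mapping that means it is in the page cache,
     * regardless of which process originally faulted it in. */
    for (i = 0; i < npages; i++)
        if (vec[i] & 1)
            count++;
    free(vec);
    return count;
}
```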
On Thu, Mar 07, 2013 at 04:43:12PM +0100, Jan Kara wrote:
> Added mm list to CC.
>
> On Tue 05-03-13 09:57:34, Howard Chu wrote:
> > I'm testing our memory-mapped database code on a small VM. The
> > machine has 32GB of RAM and the size of the DB on disk is ~44GB. The
> > database library mmaps
Added mm list to CC.
On Tue 05-03-13 09:57:34, Howard Chu wrote:
> I'm testing our memory-mapped database code on a small VM. The
> machine has 32GB of RAM and the size of the DB on disk is ~44GB. The
> database library mmaps the entire file as a single region and starts
> > accessing it as a tree of B+trees.
Howard Chu wrote:
Howard Chu wrote:
Howard Chu wrote:
2 questions:
why is there data in the FS cache that isn't owned by (the mmap of) the
process that caused it to be paged in in the first place?
is there a tunable knob to discourage the page cache from stealing from the
process?
Howard Chu wrote:
Howard Chu wrote:
2 questions:
why is there data in the FS cache that isn't owned by (the mmap of) the
process that caused it to be paged in in the first place?
is there a tunable knob to discourage the page cache from stealing from the
process?
This Unmapped page cache control
Howard Chu wrote:
2 questions:
why is there data in the FS cache that isn't owned by (the mmap of) the
process that caused it to be paged in in the first place?
is there a tunable knob to discourage the page cache from stealing from the
process?
This Unmapped page cache control http://l
I'm testing our memory-mapped database code on a small VM. The machine has
32GB of RAM and the size of the DB on disk is ~44GB. The database library
mmaps the entire file as a single region and starts accessing it as a tree of
B+trees. Running on an Ubuntu 3.5.0-23 kernel, XFS on a local disk.
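To make the access pattern concrete: a lookup descends a tree of B+trees through the mapping, so consecutive steps touch pages scattered across the whole ~44GB region, which is why sequential readahead mostly drags in pages that will never be used. A sketch with an invented node layout (not the database's actual format):

```c
/* Sketch of the access pattern described above: pointer-chasing through
 * pages of a mapping larger than RAM, where each hop may fault in one
 * more page. The node layout here is invented for illustration. */
#include <stdint.h>

struct node {                  /* hypothetical on-disk B+tree page header */
    uint64_t child_offset[16]; /* byte offsets of children within the map */
    uint16_t nkeys;            /* assumed > 0 for interior nodes */
    uint16_t is_leaf;
};

/* Follow one root-to-leaf path; every hop lands on a different page,
 * so readahead of neighbouring pages buys nothing here. */
static const struct node *descend(const char *map, uint64_t root_off,
                                  int pick)
{
    const struct node *n = (const struct node *)(map + root_off);
    while (!n->is_leaf)
        n = (const struct node *)(map + n->child_offset[pick % n->nkeys]);
    return n;
}
```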