On Mon, 25 Jul 2016 10:39:25 -0400 Kyle Walker <kwal...@redhat.com> wrote:

> Java workloads that use the MappedByteBuffer class make extensive use
> of the fadvise() and madvise() syscalls. Following recent
> readahead-limiting changes such as 600e19af ("mm: use only per-device
> readahead limit") and 6d2be915 ("mm/readahead.c: fix readahead failure
> for memoryless NUMA nodes and limit readahead pages"), application
> performance suffers in instances where a small readahead is configured.

Can this suffering be quantified please?
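
For context, MappedByteBuffer.load() boils down (as far as I can tell) to
an mmap() of the file followed by madvise(MADV_WILLNEED) over the whole
mapping.  A minimal C sketch of that syscall pattern (hypothetical, not
Kyle's reproducer) would be:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
            struct stat st;
            void *map;
            int fd;

            if (argc < 2) {
                    fprintf(stderr, "usage: %s <file>\n", argv[0]);
                    return 1;
            }
            fd = open(argv[1], O_RDONLY);
            if (fd < 0 || fstat(fd, &st) < 0) {
                    perror(argv[1]);
                    return 1;
            }

            /* FileChannel.map(READ_ONLY, 0, len) equivalent */
            map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
            if (map == MAP_FAILED) {
                    perror("mmap");
                    return 1;
            }

            /* MappedByteBuffer.load() equivalent: request readahead of
             * the whole mapping.  Post-600e19af this request is
             * truncated to the per-device ra_pages. */
            if (madvise(map, st.st_size, MADV_WILLNEED) < 0)
                    perror("madvise");

            munmap(map, st.st_size);
            close(fd);
            return 0;
    }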

> By moving this limit outside of the syscall codepaths, the syscalls
> are able to advise an arbitrarily large amount of readahead when
> desired, with a cap imposed at half of the sum of NR_INACTIVE_FILE and
> NR_FREE_PAGES. In essence, this allows performance tuning efforts to
> define a small readahead limit while still benefiting from large
> sequential readahead values selectively.
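
If I'm reading the intent right, both madvise(MADV_WILLNEED) and
posix_fadvise(POSIX_FADV_WILLNEED) funnel into
force_page_cache_readahead(), so the cap becomes a function of current
memory pressure rather than of the bdi tunable.  For example, on a
machine with 1,000,000 inactive file pages and 500,000 free pages, a
WILLNEED request could read ahead up to (1,000,000 + 500,000) / 2 =
750,000 pages (~2.9GiB with 4K pages), however small read_ahead_kb has
been tuned.
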
> 
> ...
>
> --- a/mm/readahead.c
> +++ b/mm/readahead.c
> @@ -211,7 +211,9 @@ int force_page_cache_readahead(struct address_space *mapping, struct file *filp,
>       if (unlikely(!mapping->a_ops->readpage && !mapping->a_ops->readpages))
>               return -EINVAL;
>  
> -     nr_to_read = min(nr_to_read, inode_to_bdi(mapping->host)->ra_pages);
> +     nr_to_read = min(nr_to_read, (global_page_state(NR_INACTIVE_FILE) +
> +                                    global_page_state(NR_FREE_PAGES)) / 2);
> +
>       while (nr_to_read) {
>               int err;
>  
> @@ -484,6 +486,7 @@ void page_cache_sync_readahead(struct address_space *mapping,
>  
>       /* be dumb */
>       if (filp && (filp->f_mode & FMODE_RANDOM)) {
> +             req_size = min(req_size, inode_to_bdi(mapping->host)->ra_pages);
>               force_page_cache_readahead(mapping, filp, offset, req_size);
>               return;
>       }
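
The second hunk looks reasonable to me: FMODE_RANDOM is only set when
userspace has explicitly opted out of sequential readahead, so
re-applying the per-device limit on that path preserves the old
behaviour for such callers.  For reference, a hypothetical caller (not
from the patch):

    #include <fcntl.h>

    /* Sets FMODE_RANDOM on the struct file; subsequent readahead for
     * this file then takes the "be dumb" path above, which the hunk
     * keeps clamped to the per-device ra_pages as before. */
    static void advise_random(int fd)
    {
            posix_fadvise(fd, 0, 0, POSIX_FADV_RANDOM);
    }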

Linus probably has opinions ;)
