On Tue, 27 Mar 2007 11:23:06 +0200 Miklos Szeredi <[EMAIL PROTECTED]> wrote:

> > > > > But Peter Staubach says a RH customer has files written through mmap,
> > > > > which are not being backed up.
> > > > 
> > > > Yes, I expect the backup problem is the major real-world hurt
> > > > arising from this bug.
> > > > 
> > > > But I expect we could adequately plug that problem at munmap()-time.
> > > > Or, better, do_wp_page().  As I said - half-assed.
> > > > 
> > > > It's a question of whether the backup problem is the only thing
> > > > which is hurting in the real world, or if people have other problems.
> > > > 
> > > > (In fact, what's wrong with doing it in do_wp_page()?
> > > 
> > > It's rather more expensive than just toggling a bit.
> > 
> > It shouldn't be, especially for filesystems which have one-second timestamp
> > granularity.
> > 
> > Filesystems which have s_time_gran=1 might hurt a bit, but no more than
> > they will with write().
> > 
> > Actually, no - we'd only update the mctime once per page per writeback
> > period (30 seconds by default) so the load will be small.
> 
> Why?  For each faulted page the times will be updated, no?

Yes, but only at pagefault-time.  And

- the faults are already "slow": we need to pull the page contents in
  from disk, or memset or cow the page

- we need to take the trap

compared to which, the cost of the timestamp update will (we hope) be
relatively low.

> Maybe it's acceptable, I don't really know the cost of
> file_update_time().
> 
> Tried this patch, and it seems to work.  It will even randomly update
> the time for tmpfs files (on initial fault, and on swapins).
> 
> Miklos
> 
> Index: linux/mm/memory.c
> ===================================================================
> --- linux.orig/mm/memory.c    2007-03-27 11:04:40.000000000 +0200
> +++ linux/mm/memory.c 2007-03-27 11:08:19.000000000 +0200
> @@ -1664,6 +1664,8 @@ gotten:
>  unlock:
>       pte_unmap_unlock(page_table, ptl);
>       if (dirty_page) {
> +             if (vma->vm_file)
> +                     file_update_time(vma->vm_file);
>               set_page_dirty_balance(dirty_page);
>               put_page(dirty_page);
>       }
> @@ -2316,6 +2318,8 @@ retry:
>  unlock:
>       pte_unmap_unlock(page_table, ptl);
>       if (dirty_page) {
> +             if (vma->vm_file)
> +                     file_update_time(vma->vm_file);
>               set_page_dirty_balance(dirty_page);
>               put_page(dirty_page);
>       }

That's simpler ;) Is it correct enough, though?  The place where it will
become inaccurate is repeated modification via an established map, ie:

        p = mmap(..., MAP_SHARED);
        for ( ; ; )
                *p = 1;

in which case I think the timestamp will only get updated once per
writeback interval (ie: 30 seconds).


tmpfs files have an s_time_gran of 1, so benchmarking some workload on
tmpfs with this patch will tell us the worst-case overhead of the change. 

I guess we should arrange for multiple CPUs to perform write faults against
multiple pages of the same file in parallel.  Of course, that'd be a pretty
darn short benchmark because it'll run out of RAM.  Which reveals why we
probably won't have a performance problem in there.

