On Tue, Mar 18, 2014 at 01:10:44PM +, Zuckerman, Boris wrote:
> X86 cache lines are much smaller than a page. Cache lines are flushed
> "naturally", but we do not know when that happens.
> How many dirty pages do we anticipate? What is the performance cost of
> msync()? Is that higher, if we do page
Matthew,
First of all, thank you for doing this job!
Supporting persistent memory for any OS is a bit more than adding "just another
device".
There are some thoughts and questions below. Perhaps you have discussed these
already. If so, please point me to that discussion!
> > Few questions:
> > - wh
On Mon, Mar 17, 2014 at 01:43:21PM +0200, Kirill A. Shutemov wrote:
> On Sat, Mar 15, 2014 at 10:46:13PM -0400, Matthew Wilcox wrote:
> > I'm actually working on this now. The basic idea is to put an entry in
> > the radix tree for each page. For zero pages, that's a pagecache page.
> > For pages
On Sat, Mar 15, 2014 at 10:46:13PM -0400, Matthew Wilcox wrote:
> On Sat, Mar 15, 2014 at 01:32:33AM +0200, Kirill A. Shutemov wrote:
> > Side note: I'm sceptical about the whole idea of using i_mmap_mutex to protect
> > against truncate. It will not scale well enough compared to lock_page()
> > with its granularity.
On Sat, Mar 15, 2014 at 01:32:33AM +0200, Kirill A. Shutemov wrote:
> Side note: I'm sceptical about the whole idea of using i_mmap_mutex to protect
> against truncate. It will not scale well enough compared to lock_page()
> with its granularity.
I'm actually working on this now. The basic idea is to put an entry in the
radix tree for each page.
On Sat, 2014-03-15 at 01:32 +0200, Kirill A. Shutemov wrote:
> On Fri, Mar 14, 2014 at 05:03:19PM -0600, Toshi Kani wrote:
> > +void dax_map_pages(struct vm_area_struct *vma, struct vm_fault *vmf,
> > + get_block_t get_block)
> > +{
> > + struct file *file = vma->vm_file;
> > + struct
On Fri, Mar 14, 2014 at 05:03:19PM -0600, Toshi Kani wrote:
> +void dax_map_pages(struct vm_area_struct *vma, struct vm_fault *vmf,
> + get_block_t get_block)
> +{
> + struct file *file = vma->vm_file;
> + struct inode *inode = file_inode(file);
> + struct buffer_head bh;
>
DAX provides direct access to an NVDIMM and bypasses the page cache.
The newly introduced map_pages() callback reduces page faults by adding
mappings around a faulted page, but it is not yet supported for DAX.
This patch implements the map_pages() callback for DAX. It reduces the
number of page faults and increases performance.