On Fri, Dec 21, 2018 at 02:17:32PM -0800, Mike Kravetz wrote:
> Am I misunderstanding your question/concern?

No. Thanks for the clarification.

> I have decided to add the locking (although unnecessary) with something
> like this in hugetlbfs_evict_inode.
>
> /*
>  * The vfs layer [...]
On 12/21/18 12:21 PM, Kirill A. Shutemov wrote:
> On Fri, Dec 21, 2018 at 10:28:25AM -0800, Mike Kravetz wrote:
>> On 12/21/18 2:28 AM, Kirill A. Shutemov wrote:
>>> On Tue, Dec 18, 2018 at 02:35:57PM -0800, Mike Kravetz wrote:
>>>> Instead of writing the required complicated code for this rare [...]
On Fri, Dec 21, 2018 at 10:28:25AM -0800, Mike Kravetz wrote:
> On 12/21/18 2:28 AM, Kirill A. Shutemov wrote:
> > On Tue, Dec 18, 2018 at 02:35:57PM -0800, Mike Kravetz wrote:
> >> Instead of writing the required complicated code for this rare
> >> occurrence, just eliminate the race. i_mmap_rwsem [...]
On 12/21/18 2:28 AM, Kirill A. Shutemov wrote:
> On Tue, Dec 18, 2018 at 02:35:57PM -0800, Mike Kravetz wrote:
>> Instead of writing the required complicated code for this rare
>> occurrence, just eliminate the race. i_mmap_rwsem is now held in read
>> mode for the duration of page fault processing [...]
On Tue, Dec 18, 2018 at 02:35:57PM -0800, Mike Kravetz wrote:
> Instead of writing the required complicated code for this rare
> occurrence, just eliminate the race. i_mmap_rwsem is now held in read
> mode for the duration of page fault processing. Hold i_mmap_rwsem
> longer in truncation and hole punch [...]
hugetlbfs page faults can race with truncate and hole punch operations.
Current code in the page fault path attempts to handle this by 'backing
out' operations if we encounter the race. One obvious omission in the
current code is removing a page newly added to the page cache. This is
pretty straightforward [...]