On Thu, Apr 30, 2020 at 01:38:44PM -0700, ira.we...@intel.com wrote:
> -static inline void *kmap_atomic(struct page *page)
> +static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
> {
> preempt_disable();
> pagefault_disable();
> if (!PageHighMem(page))
>
From: Ira Weiny
To support kmap_atomic_prot(), all architectures need to support
protections passed to their kmap_atomic_high() function. Pass
protections into kmap_atomic_high() and change the name to
kmap_atomic_high_prot() to match.
Then define kmap_atomic_prot() as a core function which calls
kmap_atomic_high_prot() when needed.
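
To make the shape of the refactor concrete, here is a minimal sketch of
what the core kmap_atomic_prot() and the kmap_atomic() wrapper could look
like, extrapolated from the hunk quoted above and the changelog; the
kmap_atomic_high_prot() name comes from the changelog, while kmap_prot as
the default protection is an assumption not spelled out in this excerpt.

	/*
	 * Sketch only: lowmem pages are mapped directly via
	 * page_address(); only highmem pages fall through to the
	 * per-arch hook, which now takes the protection value.
	 */
	static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
	{
		preempt_disable();
		pagefault_disable();
		if (!PageHighMem(page))
			return page_address(page);
		return kmap_atomic_high_prot(page, prot);
	}

	/*
	 * kmap_atomic() then becomes a thin wrapper passing the
	 * default protection (assumed here to be kmap_prot).
	 */
	static inline void *kmap_atomic(struct page *page)
	{
		return kmap_atomic_prot(page, kmap_prot);
	}

With this split, the arch code only has to implement the highmem path,
and every caller of kmap_atomic() keeps its existing behaviour.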