Arriving at read_kmem() with an offset representing a bogus kernel address (e.g. 0 from a simple "cat /dev/kmem") leads to copy_to_user() faulting on the kernel-space read.
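
For illustration, a minimal userspace reproducer along the lines of that
"cat" might look like the sketch below (hypothetical, not part of the
patch; assumes CONFIG_DEVKMEM=y and sufficient privilege):

	/* Reproducer sketch: read /dev/kmem at file position 0. */
	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		char buf[16];
		int fd = open("/dev/kmem", O_RDONLY);

		if (fd < 0)
			return 1;
		/* file position 0 == bogus kernel address 0 */
		if (read(fd, buf, sizeof(buf)) < 0)
			perror("read");	/* expect EFAULT ("Bad address") */
		close(fd);
		return 0;
	}

On x86_64 this already fails with -EFAULT; on other architectures it
currently oopses instead, as described below.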
x86_64 happens to get away with this since the optimised implementation
uses "rep movs*": the user write (which is allowed to fault) and the
kernel read are the same instruction, so the kernel-side fault falls
into the userspace fixup handler, and a chain of events transpires
which ends up returning the expected -EFAULT. On other architectures,
though, the read is not covered by the fixup entry for the write, and
we get a straightforward "Unable to handle kernel paging request..."
dump.

The more typical use-case of mmap_kmem() already validates the address
with pfn_valid(), as one might expect, so let's make that consistent
across {read,write}_kmem() too.

Reported-by: Kefeng Wang <wangkefeng.w...@huawei.com>
Signed-off-by: Robin Murphy <robin.mur...@arm.com>
---

I'm not sure if this warrants going to stable or not, as it's really
just making an existing failure case more graceful and less confusing.
(A sketch of what the new pfn_valid() check boils down to follows the
diff.)

 drivers/char/mem.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/char/mem.c b/drivers/char/mem.c
index 71025c2f6bbb..64c766023b15 100644
--- a/drivers/char/mem.c
+++ b/drivers/char/mem.c
@@ -384,6 +384,9 @@ static ssize_t read_kmem(struct file *file, char __user *buf,
 	char *kbuf; /* k-addr because vread() takes vmlist_lock rwlock */
 	int err = 0;
 
+	if (!pfn_valid(PFN_DOWN(p)))
+		return -EFAULT;
+
 	read = 0;
 	if (p < (unsigned long) high_memory) {
 		low_count = count;
@@ -512,6 +515,9 @@ static ssize_t write_kmem(struct file *file, const char __user *buf,
 	char *kbuf; /* k-addr because vwrite() takes vmlist_lock rwlock */
 	int err = 0;
 
+	if (!pfn_valid(PFN_DOWN(p)))
+		return -EFAULT;
+
 	if (p < (unsigned long) high_memory) {
 		unsigned long to_write = min_t(unsigned long, count,
 					       (unsigned long)high_memory - p);
-- 
2.8.1.dirty
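
For context, the guard added above boils down to something like the
sketch below. PFN_DOWN() matches the generic helper in
include/linux/pfn.h, but pfn_valid() itself is architecture- and
memory-model-specific, so flat_pfn_valid() here is a hypothetical
stand-in modelled on the simplest FLATMEM case (ignoring
ARCH_PFN_OFFSET):

	/*
	 * Illustrative sketch only. PFN_DOWN() follows the generic
	 * definition; flat_pfn_valid() is a made-up stand-in for the
	 * real, per-arch pfn_valid().
	 */
	#define PFN_DOWN(x)	((x) >> PAGE_SHIFT)

	static int flat_pfn_valid(unsigned long pfn)
	{
		/* FLATMEM-style: is there a struct page for this frame? */
		return pfn < max_mapnr;
	}

	/* the check added in {read,write}_kmem() then amounts to: */
	if (!flat_pfn_valid(PFN_DOWN(p)))
		return -EFAULT;

The point being that an arbitrary offset need not correspond to any
page the kernel knows about, in which case we can now bail out with
-EFAULT before ever dereferencing it.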