Re: [PATCH 00/15] KVM: x86: Introduce new ioctl KVM_TRANSLATE2
On Tue, Sep 10, 2024, Nikolas Wipper wrote:
> This series introduces a new ioctl KVM_TRANSLATE2, which expands on
> KVM_TRANSLATE. It is required to implement Hyper-V's
> HvTranslateVirtualAddress hyper-call as part of the ongoing effort to
> emulate Hyper-V's Virtual Secure Mode (VSM) within KVM and QEMU. The
> hyper-call requires several new KVM APIs, one of which is KVM_TRANSLATE2,
> which implements the core functionality of the hyper-call. The rest of the
> required functionality will be implemented in subsequent series.
>
> Other than translating guest virtual addresses, the ioctl allows the
> caller to control whether the accessed and dirty bits are set during the
> page walk. It also allows specifying an access mode instead of returning
> viable access modes, which enables setting the bits up to the level that
> caused a failure. Additionally, the ioctl provides more information about
> why the page walk failed, and which page table is responsible. This
> functionality is not available within KVM_TRANSLATE, and can't be added
> without breaking backwards compatibility, thus a new ioctl is required.

...

>  Documentation/virt/kvm/api.rst                | 131
>  arch/x86/include/asm/kvm_host.h               |  18 +-
>  arch/x86/kvm/hyperv.c                         |   3 +-
>  arch/x86/kvm/kvm_emulate.h                    |   8 +
>  arch/x86/kvm/mmu.h                            |  10 +-
>  arch/x86/kvm/mmu/mmu.c                        |   7 +-
>  arch/x86/kvm/mmu/paging_tmpl.h                |  80 +++--
>  arch/x86/kvm/x86.c                            | 123 ++-
>  include/linux/kvm_host.h                      |   6 +
>  include/uapi/linux/kvm.h                      |  33 ++
>  tools/testing/selftests/kvm/Makefile          |   1 +
>  .../selftests/kvm/x86_64/kvm_translate2.c     | 310 ++
>  virt/kvm/kvm_main.c                           |  41 +++
>  13 files changed, 724 insertions(+), 47 deletions(-)
>  create mode 100644 tools/testing/selftests/kvm/x86_64/kvm_translate2.c

...

> The simple reason for keeping this functionality in KVM is that it already
> has a mature, production-level page walker (which is already exposed) and
> creating something similar in QEMU would take a lot longer and would be
> much harder to maintain than just creating an API that leverages the
> existing walker.

I'm not convinced that implementing targeted support in QEMU (or any other
VMM) would be at all challenging or a burden to maintain. I do think
duplicating functionality across multiple VMMs is undesirable, but that's an
argument for creating modular userspace libraries for such functionality.
E.g. I/O APIC emulation is another one I'd love to move to a common library.

Traversing page tables isn't difficult. Checking permission bits isn't
complex. Tedious, perhaps. But not complex. KVM's rather insane code comes
from KVM's desire to make the checks as performant as possible, because
eking out every little bit of performance matters for legacy shadow paging.
I doubt VSM needs _that_ level of performance.

I say "targeted" because I assume the only use case for VSM is 64-bit
non-nested guests. QEMU already has rudimentary support for walking guest
page tables, and that code is all of 40 LoC. Granted, it's heinous and lacks
permission checks and A/D updates, but I would expect a clean implementation
with permission checks and A/D support to clock in at around 200 LoC. Maybe
300. And ignoring docs and selftests, that's roughly what's being added in
this series. Much of the code being added is quite simple, but there are
non-trivial changes here as well, e.g. the different ways of setting A/D
bits.

My biggest concern is taking on ABI that restricts what KVM can do in its
walker. E.g. I *really* don't like the PKU change.
Yeah, Intel doesn't explicitly define architectural behavior, but diverging from hardware behavior is rarely a good idea. Similarly, the behavior of FNAME(protect_clean_gpte)() probably isn't desirable for the VSM use case.
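For a sense of the scale being argued about: a minimal userspace walk of
64-bit 4-level paging really is small. The sketch below is illustrative
only — read_gpa() is a hypothetical stand-in for the VMM's guest-physical
accessor — and it skips exactly the hard parts mentioned above: no A/D
updates, no CR0.WP/SMEP/SMAP/PKU handling, no reserved-bit checks.

	#include <stdbool.h>
	#include <stdint.h>

	#define PTE_P    (1ULL << 0)
	#define PTE_W    (1ULL << 1)
	#define PTE_U    (1ULL << 2)
	#define PTE_PS   (1ULL << 7)
	#define PTE_NX   (1ULL << 63)
	#define PTE_ADDR 0x000ffffffffff000ULL

	/* Hypothetical VMM accessor: read one 64-bit PTE from guest memory. */
	extern int read_gpa(uint64_t gpa, uint64_t *val);

	static int walk_gva(uint64_t cr3, uint64_t gva, bool write, bool user,
			    bool fetch, uint64_t *gpa)
	{
		uint64_t table = cr3 & PTE_ADDR;

		for (int level = 4; level >= 1; level--) {
			unsigned int shift = 12 + 9 * (level - 1);
			uint64_t pte;

			if (read_gpa(table + ((gva >> shift) & 0x1ff) * 8, &pte))
				return -1;

			/* Not present, or a (simplified) permission fault. */
			if (!(pte & PTE_P) ||
			    (write && !(pte & PTE_W)) ||
			    (user && !(pte & PTE_U)) ||
			    (fetch && (pte & PTE_NX)))
				return -1;

			/* Leaf: the final 4KiB PTE, or a 1GiB/2MiB large page. */
			if (level == 1 || ((pte & PTE_PS) && level <= 3)) {
				uint64_t mask = (1ULL << shift) - 1;

				*gpa = (pte & PTE_ADDR & ~mask) | (gva & mask);
				return 0;
			}
			table = pte & PTE_ADDR;
		}
		return -1;
	}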
Re: [PATCH 14/15] KVM: x86: Implement KVM_TRANSLATE2
On Tue, Sep 10, 2024, Nikolas Wipper wrote:
> +int kvm_arch_vcpu_ioctl_translate2(struct kvm_vcpu *vcpu,
> +				   struct kvm_translation2 *tr)
> +{
> +	int idx, set_bit_mode = 0, access = 0;
> +	struct x86_exception exception = { };
> +	gva_t vaddr = tr->linear_address;
> +	u16 status = 0;
> +	gpa_t gpa;
> +
> +	if (tr->flags & KVM_TRANSLATE_FLAGS_SET_ACCESSED)
> +		set_bit_mode |= PWALK_SET_ACCESSED;
> +	if (tr->flags & KVM_TRANSLATE_FLAGS_SET_DIRTY)
> +		set_bit_mode |= PWALK_SET_DIRTY;
> +	if (tr->flags & KVM_TRANSLATE_FLAGS_FORCE_SET_ACCESSED)
> +		set_bit_mode |= PWALK_FORCE_SET_ACCESSED;
> +
> +	if (tr->access & KVM_TRANSLATE_ACCESS_WRITE)
> +		access |= PFERR_WRITE_MASK;
> +	if (tr->access & KVM_TRANSLATE_ACCESS_USER)
> +		access |= PFERR_USER_MASK;
> +	if (tr->access & KVM_TRANSLATE_ACCESS_EXEC)
> +		access |= PFERR_FETCH_MASK;

WRITE and FETCH accesses need to be mutually exclusive.
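A sketch of the kind of check being asked for, placed before the access bits
are translated (whether -EINVAL is the right error for the new uAPI is the
author's call):

	/* A single walk can't be both a write and an instruction fetch. */
	if ((tr->access & KVM_TRANSLATE_ACCESS_WRITE) &&
	    (tr->access & KVM_TRANSLATE_ACCESS_EXEC))
		return -EINVAL;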
Re: [PATCH v4 0/9] mm: workingset reporting
On Fri, 6 Dec 2024 11:57:55 -0800 Yuanchu Xie wrote:
> Thanks for the response Johannes. Some replies inline.
>
> On Tue, Nov 26, 2024 at 11:26 PM Johannes Weiner wrote:
> >
> > On Tue, Nov 26, 2024 at 06:57:19PM -0800, Yuanchu Xie wrote:
> > > This patch series provides workingset reporting of user pages in
> > > lruvecs, of which coldness can be tracked by accessed bits and fd
> > > references. However, the concept of workingset applies generically to
> > > all types of memory, which could be kernel slab caches, discardable
> > > userspace caches (databases), or CXL.mem. Therefore, data sources might
> > > come from slab shrinkers, device drivers, or the userspace.
> > > Another interesting idea might be hugepage workingset, so that we can
> > > measure the proportion of hugepages backing cold memory. However, with
> > > architectures like arm, there may be too many hugepage sizes leading to
> > > a combinatorial explosion when exporting stats to the userspace.
> > > Nonetheless, the kernel should provide a set of workingset interfaces
> > > that is generic enough to accommodate the various use cases, and
> > > extensible to potential future use cases.
> >
> > Doesn't DAMON already provide this information?
> >
> > CCing SJ.
>
> Thanks for the CC. DAMON was really good at visualizing the memory
> access frequencies last time I tried it out!

Thank you for this kind acknowledgement, Yuanchu!

> For server use cases, DAMON would benefit from integrations with cgroups.
> The key then would be a standard interface for exporting a cgroup's
> working set to the user.

I see two ways to make DAMON support cgroups for now.

The first way is making another DAMON operations set implementation for
cgroups. I shared a rough idea for this before, probably at the kernel
summit, but I haven't had a chance to prioritize this so far. Please let me
know if you need more details.

The second way is extending DAMOS filters to provide more detailed
statistics per DAMON-region, and adding another DAMOS action that does
nothing but account the detailed statistics. Using the new DAMOS action,
users will be able to know how much of specific DAMON-found regions is
filtered out by the given filter. Because we have a DAMOS filter type for
cgroups, we can know how much of the workingset (or, warm memory) belongs to
specific cgroups. This can be applied not only to cgroups, but to any DAMOS
filter type that exists (e.g., anonymous page, young page).

I believe the second way is simpler to implement while providing information
sufficient for most possible use cases. I was anyway planning to do this.

> It would be good to have something that will work for different
> backing implementations, DAMON, MGLRU, or active/inactive LRU.

I think we can do this using the filter statistics, with new filter types.
For example, we can add a new DAMOS filter that filters pages based on a
specific range of MGLRU generations, or on whether the page belongs to the
active or inactive LRU lists.

> > > Use cases
> > > =========

[...]

> > Access frequency is only half the picture. Whether you need to keep
> > memory with a given frequency resident depends on the speed of the
> > backing device.

[...]

> > > Benchmarks
> > > ==========
> > > Ghait Ouled Amar Ben Cheikh has implemented a simple policy and ran
> > > Linux compile and redis benchmarks from openbenchmarking.org. The
> > > policy and runner is referred to as WMO (Workload Memory Optimization).
> > > The results were based on v3 of the series, but v4 doesn't change the
> > > core of the working set reporting and just adds the ballooning
> > > counterpart.
> > >
> > > The timed Linux kernel compilation benchmark shows improvements in peak
> > > memory usage with a policy of "swap out all bytes colder than 10 seconds
> > > every 40 seconds". A swapfile is configured on SSD.

[...]

> > You can do this with a recent (>2018) upstream kernel and ~100 lines
> > of python [1]. It also works on both LRU implementations.
> >
> > [1] https://github.com/facebookincubator/senpai
> >
> > We use this approach in virtually the entire Meta fleet, to offload
> > unneeded memory, estimate available capacity for job scheduling, plan
> > future capacity needs, and provide accurate memory usage feedback to
> > application developers.
> >
> > It works over a wide variety of CPU and storage configurations with no
> > specific tuning.
> >
> > The paper I referenced above provides a detailed breakdown of how it
> > all works together.
> >
> > I would be curious to see a more in-depth comparison to the prior art
> > in this space. At first glance, your proposal seems more complex and
> > less robust/versatile, at least for offloading and capacity gauging.
>
> We have implemented TMO PSI-based proactive reclaim and compared it to
> a kstaled-based reclaimer (reclaiming based on a 2 minute working set
> and refaults). The PSI-based reclaimer was able to save more memory,
> but it also caused spikes of
Re: [PATCH v3 2/9] arm64/sysreg: Update ID_AA64ISAR3_EL1 to DDI0601 2024-09
On Tue, Dec 10, 2024 at 06:43:05PM +0000, Mark Brown wrote:
> On Tue, Dec 10, 2024 at 05:09:55PM +0000, Will Deacon wrote:
>
> > Can we _please_ just generate this stuff. It feels like we've been
> > making silly typos over and over again with the current approach so
> > either it's hard or we're not very good at it. Either way, it should be
> > automated.
>
> > Others have managed it [1], so it's clearly do-able.
>
> Yes, the issues here are not technical ones. Though there are some
> complications - eg, IIRC the XML doesn't encode the signedness of
> fields like we do and there's areas where we've deliberately diverged.
> Given the amount of review I end up having to do of sysreg changes your
> reasoning is especially apparent to me. I've passed this feedback on
> (again).

One thing we _could_ do is have a tool (in-tree) that takes two copies of
the sysreg file (i.e. before and after applying a diff) along with a copy
of the XML and, for the new fields being added, shows how the XML
represents those compared to the diff.

It should then be relatively straightforward to flag the use of an
unallocated encoding (like we had here) and also things like assigning a
field name to a RES0 region.

So this wouldn't be generating the patches from the XML, but more like
using the XML as an oracle in a linter.

Will
RE: [PATCH v2 12/13] iommu/arm-smmu-v3: Introduce struct arm_smmu_vmaster
> From: Nicolin Chen
> Sent: Wednesday, December 4, 2024 6:10 AM
>
> Use it to store all vSMMU-related data. The vsid (Virtual Stream ID) will
> be the first use case. Then, add a rw_semaphore to protect it.
>
> Also add a pair of arm_smmu_attach_prepare/commit_vmaster helpers and put
> them in the existing arm_smmu_attach_prepare/commit(). Note that identity
> and blocked ops don't call arm_smmu_attach_prepare/commit(), thus simply
> call the new helpers at the top.

Probably a dumb question. viommu is tied to a nested parent domain which
cannot be identity or blocked. Why do we need to change them too?
RE: [PATCH v2 13/13] iommu/arm-smmu-v3: Report IRQs that belong to devices attached to vIOMMU
> From: Nicolin Chen
> Sent: Wednesday, December 4, 2024 6:10 AM
>
> +
> +/**
> + * struct iommu_virq_arm_smmuv3 - ARM SMMUv3 Virtual IRQ
> + *                                (IOMMU_VIRQ_TYPE_ARM_SMMUV3)
> + * @evt: 256-bit ARM SMMUv3 Event record, little-endian.
> + *
> + * StreamID field reports a virtual device ID. To receive a virtual IRQ for a
> + * device, a vDEVICE must be allocated via IOMMU_VDEVICE_ALLOC.
> + */

Similar to what's provided for iommu_hw_info_arm_smmuv3, it'd be good to
refer to a section in the SMMU spec for the bit definitions.

> @@ -1779,33 +1779,6 @@ static int arm_smmu_handle_evt(struct arm_smmu_device *smmu, u64 *evt)
>  		return -EOPNOTSUPP;
>  	}
>
> -	if (!(evt[1] & EVTQ_1_STALL))
> -		return -EOPNOTSUPP;
> -
> -	if (evt[1] & EVTQ_1_RnW)
> -		perm |= IOMMU_FAULT_PERM_READ;
> -	else
> -		perm |= IOMMU_FAULT_PERM_WRITE;
> -
> -	if (evt[1] & EVTQ_1_InD)
> -		perm |= IOMMU_FAULT_PERM_EXEC;
> -
> -	if (evt[1] & EVTQ_1_PnU)
> -		perm |= IOMMU_FAULT_PERM_PRIV;
> -
> -	flt->type = IOMMU_FAULT_PAGE_REQ;
> -	flt->prm = (struct iommu_fault_page_request) {
> -		.flags = IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE,
> -		.grpid = FIELD_GET(EVTQ_1_STAG, evt[1]),
> -		.perm = perm,
> -		.addr = FIELD_GET(EVTQ_2_ADDR, evt[2]),
> -	};
> -
> -	if (ssid_valid) {
> -		flt->prm.flags |= IOMMU_FAULT_PAGE_REQUEST_PASID_VALID;
> -		flt->prm.pasid = FIELD_GET(EVTQ_0_SSID, evt[0]);
> -	}
> -
>  	mutex_lock(&smmu->streams_mutex);
>  	master = arm_smmu_find_master(smmu, sid);
>  	if (!master) {
> @@ -1813,7 +1786,40 @@ static int arm_smmu_handle_evt(struct arm_smmu_device *smmu, u64 *evt)
>  		goto out_unlock;
>  	}
>
> -	ret = iommu_report_device_fault(master->dev, &fault_evt);
> +	down_read(&master->vmaster_rwsem);

This lock is not required if the event is EVTQ_1_STALL?

> +	if (evt[1] & EVTQ_1_STALL) {
> +		if (evt[1] & EVTQ_1_RnW)
> +			perm |= IOMMU_FAULT_PERM_READ;
> +		else
> +			perm |= IOMMU_FAULT_PERM_WRITE;
> +
> +		if (evt[1] & EVTQ_1_InD)
> +			perm |= IOMMU_FAULT_PERM_EXEC;
> +
> +		if (evt[1] & EVTQ_1_PnU)
> +			perm |= IOMMU_FAULT_PERM_PRIV;
> +
> +		flt->type = IOMMU_FAULT_PAGE_REQ;
> +		flt->prm = (struct iommu_fault_page_request){
> +			.flags = IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE,
> +			.grpid = FIELD_GET(EVTQ_1_STAG, evt[1]),
> +			.perm = perm,
> +			.addr = FIELD_GET(EVTQ_2_ADDR, evt[2]),
> +		};
> +
> +		if (ssid_valid) {
> +			flt->prm.flags |= IOMMU_FAULT_PAGE_REQUEST_PASID_VALID;
> +			flt->prm.pasid = FIELD_GET(EVTQ_0_SSID, evt[0]);
> +		}
> +
> +		ret = iommu_report_device_fault(master->dev, &fault_evt);
> +	} else if (master->vmaster && !(evt[1] & EVTQ_1_S2)) {
> +		ret = arm_vmaster_report_event(master->vmaster, evt);
> +	} else {
> +		/* Unhandled events should be pinned */
> +		ret = -EFAULT;
> +	}
> +	up_read(&master->vmaster_rwsem);
>  out_unlock:
>  	mutex_unlock(&smmu->streams_mutex);
>  	return ret;
> --
> 2.43.0
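To illustrate the locking question above: if nothing in the stall path
touches master->vmaster, the rwsem could be confined to the vIOMMU branch,
roughly like this (an untested sketch, eliding the flt setup shown in the
patch):

	if (evt[1] & EVTQ_1_STALL) {
		/* ... build perm/flt and report as in the patch, no rwsem ... */
		ret = iommu_report_device_fault(master->dev, &fault_evt);
	} else {
		/* Only this branch dereferences master->vmaster */
		down_read(&master->vmaster_rwsem);
		if (master->vmaster && !(evt[1] & EVTQ_1_S2))
			ret = arm_vmaster_report_event(master->vmaster, evt);
		else
			ret = -EFAULT;
		up_read(&master->vmaster_rwsem);
	}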
RE: [PATCH v2 11/13] Documentation: userspace-api: iommufd: Update EVENTQ_IOPF and EVENTQ_VIRQ
> From: Nicolin Chen
> Sent: Wednesday, December 4, 2024 6:10 AM
>
> With the introduction of the new objects, update the doc to reflect that.
>
> Signed-off-by: Nicolin Chen
> ---
>  Documentation/userspace-api/iommufd.rst | 19 +++
>  1 file changed, 19 insertions(+)
>
> diff --git a/Documentation/userspace-api/iommufd.rst b/Documentation/userspace-api/iommufd.rst
> index 70289d6815d2..798520d9344d 100644
> --- a/Documentation/userspace-api/iommufd.rst
> +++ b/Documentation/userspace-api/iommufd.rst
> @@ -63,6 +63,14 @@ Following IOMMUFD objects are exposed to userspace:
>     space usually has mappings from guest-level I/O virtual addresses to guest-
>     level physical addresses.
>
> +- IOMMUFD_OBJ_EVENTQ_IOPF, representing a software queue for an HWPT_NESTED

Now it can be used on a paging hwpt too.

> +  reporting IO Page Fault using the IOMMU HW's PRI (Page Request Interface).
> +  This queue object provides user space an FD to poll the page fault events
> +  and also to respond to those events. An EVENTQ_IOPF object must be created
> +  first to get a fault_id that could be then used to allocate an HWPT_NESTED
> +  via the IOMMU_HWPT_ALLOC command setting IOMMU_HWPT_FAULT_ID_VALID set in
> +  its flags field.
> +
>  - IOMMUFD_OBJ_VIOMMU, representing a slice of the physical IOMMU instance,
>    passed to or shared with a VM. It may be some HW-accelerated virtualization
>    features and some SW resources used by the VM. For examples:
> @@ -109,6 +117,15 @@ Following IOMMUFD objects are exposed to userspace:
>    vIOMMU, which is a separate ioctl call from attaching the same device to an
>    HWPT_PAGING that the vIOMMU holds.
>
> +- IOMMUFD_OBJ_EVENTQ_VIRQ, representing a software queue for IOMMUFD_OBJ_VIOMMU
> +  reporting its non-affiliated events, such as translation faults occurred to a

'non-affiliated' is only mentioned here. It's not a standard term in this
area. Sticking to the later examples in 'such as' is more straightforward.

> +  nested stage-1 and HW-specific events/irqs e.g. events to invalidation queues
> +  that are assigned to VMs via vIOMMUs. This queue object provides user

vcmdq is not supported yet. Add it later.

> +  space an FD to poll the vIOMMU events. A vIOMMU object must be created
> +  first to get its viommu_id that could be then used to allocate an
> +  EVENTQ_VIRQ. Each vIOMMU can support multiple types of EVENTQ_VIRQs, but
> +  is confined to one EVENTQ_VIRQ per vIRQ type.
> +
>  All user-visible objects are destroyed via the IOMMU_DESTROY uAPI.
>
>  The diagrams below show relationships between user-visible objects and kernel
> @@ -251,8 +268,10 @@ User visible objects are backed by following datastructures:
>  - iommufd_device for IOMMUFD_OBJ_DEVICE.
>  - iommufd_hwpt_paging for IOMMUFD_OBJ_HWPT_PAGING.
>  - iommufd_hwpt_nested for IOMMUFD_OBJ_HWPT_NESTED.
> +- iommufd_eventq_iopf for IOMMUFD_OBJ_EVENTQ_IOPF.
>  - iommufd_viommu for IOMMUFD_OBJ_VIOMMU.
>  - iommufd_vdevice for IOMMUFD_OBJ_VDEVICE.
> +- iommufd_eventq_virq for IOMMUFD_OBJ_EVENTQ_VIRQ.
>
>  Several terminologies when looking at these datastructures:
>
> --
> 2.43.0
RE: [PATCH v2 06/13] iommufd/viommu: Add iommufd_viommu_get_vdev_id helper
> From: Nicolin Chen
> Sent: Wednesday, December 4, 2024 6:10 AM
>
> +/* Return 0 if device is not associated to the vIOMMU */
> +unsigned long iommufd_viommu_get_vdev_id(struct iommufd_viommu *viommu,
> +					 struct device *dev)
> +{
> +	struct iommufd_vdevice *vdev;
> +	unsigned long vdev_id = 0;
> +	unsigned long index;
> +
> +	xa_lock(&viommu->vdevs);
> +	xa_for_each(&viommu->vdevs, index, vdev) {
> +		if (vdev && vdev->dev == dev)

xa_for_each() only finds valid entries, so the 'if (vdev)' check is
redundant?

> +			vdev_id = (unsigned long)vdev->id;

Break out of the loop on a hit.

> +	}
> +	xa_unlock(&viommu->vdevs);
> +	return vdev_id;
> +}
> +EXPORT_SYMBOL_NS_GPL(iommufd_viommu_get_vdev_id, IOMMUFD);
> +
>  MODULE_DESCRIPTION("iommufd code shared with builtin modules");
>  MODULE_LICENSE("GPL");
> --
> 2.43.0
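Putting both comments together, the loop could be simplified to something
like this (an untested sketch of the suggested cleanup):

	xa_lock(&viommu->vdevs);
	xa_for_each(&viommu->vdevs, index, vdev) {
		/* xa_for_each() only visits non-NULL entries */
		if (vdev->dev == dev) {
			vdev_id = (unsigned long)vdev->id;
			break;
		}
	}
	xa_unlock(&viommu->vdevs);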
RE: [PATCH v2 07/13] iommufd/viommu: Add iommufd_viommu_report_irq helper
> From: Nicolin Chen
> Sent: Wednesday, December 4, 2024 6:10 AM
>
> +/* Typically called in driver's threaded IRQ handler */
> +int iommufd_viommu_report_irq(struct iommufd_viommu *viommu, unsigned int type,
> +			      void *irq_ptr, size_t irq_len)
> +{
> +	struct iommufd_eventq_virq *eventq_virq;
> +	struct iommufd_virq *virq;
> +	int rc = 0;
> +
> +	might_sleep();

Why is it required here but not in the iopf path?

> +
> +	if (!viommu)
> +		return -ENODEV;
> +	if (WARN_ON_ONCE(!irq_len || !irq_ptr))
> +		return -EINVAL;
> +
> +	down_read(&viommu->virqs_rwsem);
> +
> +	eventq_virq = iommufd_viommu_find_eventq_virq(viommu, type);
> +	if (!eventq_virq) {
> +		rc = -EOPNOTSUPP;
> +		goto out_unlock_vdev_ids;

s/out_unlock_vdev_ids/out_unlock_virqs/

> +	}
> +
> +	virq = kzalloc(sizeof(*virq) + irq_len, GFP_KERNEL);
> +	if (!virq) {
> +		rc = -ENOMEM;
> +		goto out_unlock_vdev_ids;
> +	}
> +	virq->irq_data = (void *)virq + sizeof(*virq);
> +	memcpy(virq->irq_data, irq_ptr, irq_len);
> +
> +	virq->eventq_virq = eventq_virq;
> +	virq->irq_len = irq_len;
> +
> +	iommufd_eventq_virq_handler(virq);
> +out_unlock_vdev_ids:
> +	up_read(&viommu->virqs_rwsem);
> +	return rc;
> +}
> +EXPORT_SYMBOL_NS_GPL(iommufd_viommu_report_irq, IOMMUFD);
> +
>  MODULE_DESCRIPTION("iommufd code shared with builtin modules");
>  MODULE_LICENSE("GPL");
> --
> 2.43.0
Re: [PATCH v6 28/28] ntsync: No longer depend on BROKEN.
On Thu, Dec 12, 2024, at 05:52, kernel test robot wrote:
> Hi Elizabeth,
>
> kernel test robot noticed the following build errors:
>
> [auto build test ERROR on cdd30ebb1b9f36159d66f088b61aee264e649d7a]
>
> url: https://github.com/intel-lab-lkp/linux/commits/Elizabeth-Figura/ntsync-Introduce-NTSYNC_IOC_WAIT_ANY/20241210-031155
> base: cdd30ebb1b9f36159d66f088b61aee264e649d7a
>
> All errors (new ones prefixed by >>):
>
>    In file included from include/linux/spinlock.h:60,
>                     from include/linux/wait.h:9,
>                     from include/linux/wait_bit.h:8,
>                     from include/linux/fs.h:6,
>                     from drivers/misc/ntsync.c:11:
>    In function 'check_copy_size',
>        inlined from 'copy_from_user' at include/linux/uaccess.h:207:7,
>        inlined from 'setup_wait' at drivers/misc/ntsync.c:903:6:
> >> include/linux/thread_info.h:259:25: error: call to '__bad_copy_to' declared with attribute error: copy destination size is too small
>      259 |                         __bad_copy_to();
>          |                         ^~~

I looked up the function from the github URL above and found

	int fds[NTSYNC_MAX_WAIT_COUNT + 1];
	const __u32 count = args->count;
	struct ntsync_q *q;
	__u32 total_count;
	__u32 i, j;

	if (args->pad || (args->flags & ~NTSYNC_WAIT_REALTIME))
		return -EINVAL;

	if (args->count > NTSYNC_MAX_WAIT_COUNT)
		return -EINVAL;

	total_count = count;
	if (args->alert)
		total_count++;

	if (copy_from_user(fds, u64_to_user_ptr(args->objs),
			   array_size(count, sizeof(*fds))))
		return -EFAULT;

which looks correct to me, as it has appropriate range checking on
args->count, but I can see how the warning may be a result of checking
'args->count' instead of 'count'.

       Arnd
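If that is indeed the cause, bounds-checking the local 'count' — the
variable actually used to size the copy — instead of 'args->count' might be
enough for the compiler to prove the copy fits. A sketch of that idea, not a
tested fix:

	const __u32 count = args->count;

	/*
	 * Check the same variable that feeds array_size(), so the
	 * compiler can connect the bound to the copy size.
	 */
	if (count > NTSYNC_MAX_WAIT_COUNT)
		return -EINVAL;

	if (copy_from_user(fds, u64_to_user_ptr(args->objs),
			   array_size(count, sizeof(*fds))))
		return -EFAULT;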
Re: [PATCH v6 28/28] ntsync: No longer depend on BROKEN.
Hi Elizabeth,

kernel test robot noticed the following build errors:

[auto build test ERROR on cdd30ebb1b9f36159d66f088b61aee264e649d7a]

url:    https://github.com/intel-lab-lkp/linux/commits/Elizabeth-Figura/ntsync-Introduce-NTSYNC_IOC_WAIT_ANY/20241210-031155
base:   cdd30ebb1b9f36159d66f088b61aee264e649d7a
patch link:    https://lore.kernel.org/r/20241209185904.507350-29-zfigura%40codeweavers.com
patch subject: [PATCH v6 28/28] ntsync: No longer depend on BROKEN.
config: i386-randconfig-002-20241212 (https://download.01.org/0day-ci/archive/20241212/202412121219.eqhubn0s-...@intel.com/config)
compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20241212/202412121219.eqhubn0s-...@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot
| Closes: https://lore.kernel.org/oe-kbuild-all/202412121219.eqhubn0s-...@intel.com/

All errors (new ones prefixed by >>):

   In file included from include/linux/spinlock.h:60,
                    from include/linux/wait.h:9,
                    from include/linux/wait_bit.h:8,
                    from include/linux/fs.h:6,
                    from drivers/misc/ntsync.c:11:
   In function 'check_copy_size',
       inlined from 'copy_from_user' at include/linux/uaccess.h:207:7,
       inlined from 'setup_wait' at drivers/misc/ntsync.c:903:6:
>> include/linux/thread_info.h:259:25: error: call to '__bad_copy_to' declared with attribute error: copy destination size is too small
     259 |                         __bad_copy_to();
         |                         ^~~

vim +/__bad_copy_to +259 include/linux/thread_info.h

b0377fedb652808 Al Viro   2017-06-29  248  
9dd819a15162f8f Kees Cook 2019-09-25  249  static __always_inline __must_check bool
b0377fedb652808 Al Viro   2017-06-29  250  check_copy_size(const void *addr, size_t bytes, bool is_source)
b0377fedb652808 Al Viro   2017-06-29  251  {
c80d92fbb67b2c8 Kees Cook 2021-06-17  252  	int sz = __builtin_object_size(addr, 0);
b0377fedb652808 Al Viro   2017-06-29  253  	if (unlikely(sz >= 0 && sz < bytes)) {
b0377fedb652808 Al Viro   2017-06-29  254  		if (!__builtin_constant_p(bytes))
b0377fedb652808 Al Viro   2017-06-29  255  			copy_overflow(sz, bytes);
b0377fedb652808 Al Viro   2017-06-29  256  		else if (is_source)
b0377fedb652808 Al Viro   2017-06-29  257  			__bad_copy_from();
b0377fedb652808 Al Viro   2017-06-29  258  		else
b0377fedb652808 Al Viro   2017-06-29 @259  			__bad_copy_to();
b0377fedb652808 Al Viro   2017-06-29  260  		return false;
b0377fedb652808 Al Viro   2017-06-29  261  	}
6d13de1489b6bf5 Kees Cook 2019-12-04  262  	if (WARN_ON_ONCE(bytes > INT_MAX))
6d13de1489b6bf5 Kees Cook 2019-12-04  263  		return false;
b0377fedb652808 Al Viro   2017-06-29  264  	check_object_size(addr, bytes, is_source);
b0377fedb652808 Al Viro   2017-06-29  265  	return true;
b0377fedb652808 Al Viro   2017-06-29  266  }
b0377fedb652808 Al Viro   2017-06-29  267  

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki