[AMD Official Use Only - AMD Internal Distribution Only]
> -----Original Message-----
> From: Koenig, Christian <[email protected]>
> Sent: Thursday, March 12, 2026 5:18 PM
> To: Zhang, Jesse(Jie) <[email protected]>; [email protected]
> Cc: Deucher, Alexander <[email protected]>
> Subject: Re: [PATCH] drm/amdgpu: add overflow check for BO list array
> allocation
>
> On 3/12/26 09:33, Zhang, Jesse(Jie) wrote:
> >
> >> -----Original Message-----
> >> From: Koenig, Christian <[email protected]>
> >> Sent: Thursday, March 12, 2026 4:23 PM
> >> To: Zhang, Jesse(Jie) <[email protected]>;
> >> [email protected]
> >> Cc: Deucher, Alexander <[email protected]>
> >> Subject: Re: [PATCH] drm/amdgpu: add overflow check for BO list array
> >> allocation
> >>
> >> On 3/12/26 09:18, Jesse.Zhang wrote:
> >>> When allocating memory for a BO list array, the multiplication
> >>> bo_number * info_size may overflow on 32-bit systems if userspace
> >>> supplies large values. This could lead to allocating a smaller
> >>> buffer than expected, followed by a memset or copy_from_user that
> >>> writes beyond the allocated memory, potentially causing memory
> >>> corruption or information disclosure.
> >>>
> >>> Add an overflow check using check_mul_overflow to detect such cases.
> >>> Also ensure the resulting allocation size does not exceed INT_MAX,
> >>> as the subsequent user copy operations may rely on this limit.
> >>> Return -EINVAL if either condition fails.
> >>
> >> That is completely unnecessary; vmemdup_array_user() already does that
> >> check.
> >>
> >>>
> >>> A kernel log (allocation-size warning) illustrating the issue:
> >>>
> >>> [ 2943.053706] RIP: 0010:__kvmalloc_node_noprof+0x5be/0x8a0
> >>> ...
> >>> [ 2943.053725] Call Trace:
> >>> [ 2943.053728] amdgpu_bo_create_list_entry_array+0x42/0x130 [amdgpu]
> >>> [ 2943.053947] amdgpu_bo_list_ioctl+0x51/0x300 [amdgpu]
> >>> [ 2943.054277] drm_ioctl+0x2cb/0x5a0 [drm]
> >>> [ 2943.054379] __x64_sys_ioctl+0x9e/0xf0
> >>>
> >>> The overflow occurs in the allocation size computed in
> >>> amdgpu_bo_create_list_entry_array, tripping the allocation-size
> >>> warning in __kvmalloc_node_noprof (reached via vmemdup_user).
> >>
> >> How and on which kernel can you reproduce that?
> > We are developing fuzz tests for the unified project. The tests pass
> > varying levels of garbage data to the kernel and verify that it is
> > handled correctly.
> > This issue can be reproduced on the amd-staging-drm-next branch.
>
> Do you have the full backtrace?
Yes,
[ 2943.053649] WARNING: mm/slub.c:7152 at __kvmalloc_node_noprof+0x5be/0x8a0, CPU#13: amd_fuzzing/2765
[ 2943.053655] Modules linked in: nls_iso8859_1 amdgpu(OE) amdxcp
drm_panel_backlight_quirks gpu_sched drm_buddy drm_ttm_helper ttm drm_exec
drm_suballoc_helper drm_client_lib drm_display_helper cec rc_core
drm_kms_helper i2c_algo_bit rpcsec_gss_krb5 auth_rpcgss nfsv4 nfs lockd grace
netfs nf_conntrack_netlink xt_nat xt_tcpudp veth xt_conntrack xt_MASQUERADE
bridge stp llc xt_set ip_set nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6
nf_defrag_ipv4 xt_addrtype nft_compat x_tables nf_tables xfrm_user xfrm_algo
overlay qrtr sunrpc binfmt_misc amd_atl intel_rapl_msr intel_rapl_common
snd_hda_codec_alc882 snd_hda_codec_realtek_lib snd_hda_codec_generic
edac_mce_amd snd_hda_codec_atihdmi snd_hda_codec_hdmi kvm_amd snd_hda_intel
snd_hda_codec kvm snd_hda_core snd_seq_midi snd_intel_dspcfg snd_seq_midi_event
snd_intel_sdw_acpi snd_rawmidi snd_hwdep ghash_clmulni_intel snd_pcm snd_seq
aesni_intel snd_pci_acp5x snd_seq_device i2c_piix4 wmi_bmof rapl snd_timer
snd_rn_pci_acp3x snd_acp_config snd k10temp i2c_smbus snd_soc_acpi ccp
[ 2943.053688] soundcore snd_pci_acp3x input_leds joydev mac_hid sch_fq_codel
drm(E) msr parport_pc ppdev lp parport efi_pstore nfnetlink dmi_sysfs autofs4
cdc_ether usbnet r8152 mii hid_generic usbhid hid nvme video nvme_core wmi
[ 2943.053700] CPU: 13 UID: 0 PID: 2765 Comm: amd_fuzzing Tainted: G OE 6.19.0+ #79 PREEMPT(voluntary)
[ 2943.053703] Tainted: [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
[ 2943.053703] Hardware name: AMD Splinter/Splinter-GNR, BIOS TS51202B_Patch0B_432 12/10/2025
[ 2943.053704] RIP: 0010:__kvmalloc_node_noprof+0x5be/0x8a0
[ 2943.053706] Code: fc ff ff 65 ff 0d 02 50 68 02 0f 85 9d fe ff ff 0f 1f 44 00 00 e9 93 fe ff ff 45 31 db 41 81 e6 00 20 00 00 0f 85 f5 fc ff ff <0f> 0b e9 ee fc ff ff 65 8b 05 d8 4f 68 02 48 0f a3 05 7c 7f 17 02
[ 2943.053707] RSP: 0018:ffffcc49868cb8b0 EFLAGS: 00010246
[ 2943.053709] RAX: 00000000007fffff RBX: ffff89bdc17dae00 RCX: 0000000000000017
[ 2943.053710] RDX: 0000000000000001 RSI: ffffffffbc0c9f6e RDI: 0000000000002000
[ 2943.053711] RBP: ffffcc49868cb920 R08: 0000000000000000 R09: 0000000000102cc0
[ 2943.053711] R10: 00000000ffffffff R11: 0000000000000000 R12: 00000007fffffff8
[ 2943.053712] R13: 00000007fffffff8 R14: 0000000000000000 R15: 00000000001028c0
[ 2943.053713] FS: 00007fb9ae31b5c0(0000) GS:ffff89c51fe09000(0000) knlGS:0000000000000000
[ 2943.053713] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 2943.053714] CR2: 0000560900ed4ff8 CR3: 000000010e20f000 CR4: 0000000000750ef0
[ 2943.053715] PKRU: 55555554
[ 2943.053715] Call Trace:
[ 2943.053717] <TASK>
[ 2943.053719] ? __radix_tree_lookup+0x47/0xe0
[ 2943.053722] ? vmemdup_user+0x38/0xb0
[ 2943.053725] vmemdup_user+0x38/0xb0
[ 2943.053726] ? vmemdup_user+0x38/0xb0
[ 2943.053728] amdgpu_bo_create_list_entry_array+0x42/0x130 [amdgpu]
[ 2943.053846] ? __pfx_amdgpu_gem_mmap_ioctl+0x10/0x10 [amdgpu]
[ 2943.053947] amdgpu_bo_list_ioctl+0x51/0x300 [amdgpu]
[ 2943.054049] ? __pfx_amdgpu_bo_list_ioctl+0x10/0x10 [amdgpu]
[ 2943.054150] drm_ioctl_kernel+0xab/0x110 [drm]
[ 2943.054175] ? __pfx_amdgpu_bo_list_ioctl+0x10/0x10 [amdgpu]
[ 2943.054277] drm_ioctl+0x2cb/0x5a0 [drm]
[ 2943.054289] amdgpu_drm_ioctl+0x4f/0x90 [amdgpu]
[ 2943.054379] __x64_sys_ioctl+0x9e/0xf0
[ 2943.054382] x64_sys_call+0x1280/0x21b0
[ 2943.054384] do_syscall_64+0x74/0x7b0
[ 2943.054387] ? ktime_get_mono_fast_ns+0x47/0xd0
[ 2943.054389] ? amdgpu_drm_ioctl+0x70/0x90 [amdgpu]
[ 2943.054476] ? __x64_sys_ioctl+0x9e/0xf0
[ 2943.054478] ? x64_sys_call+0x1280/0x21b0
[ 2943.054479] ? do_syscall_64+0xa8/0x7b0
[ 2943.054480] ? __x64_sys_ioctl+0x9e/0xf0
[ 2943.054481] ? x64_sys_call+0x1280/0x21b0
[ 2943.054482] ? do_syscall_64+0xa8/0x7b0
[ 2943.054483] ? amdgpu_drm_ioctl+0x70/0x90 [amdgpu]
[ 2943.054571] ? __x64_sys_ioctl+0x9e/0xf0
[ 2943.054572] ? x64_sys_call+0x1280/0x21b0
[ 2943.054573] ? do_syscall_64+0xa8/0x7b0
[ 2943.054574] ? irqentry_exit+0x42/0x610
[ 2943.054576] ? do_sys_open+0x49/0x80
[ 2943.054578] ? exc_page_fault+0x97/0x190
[ 2943.054579] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 2943.054580] RIP: 0033:0x7fb9b0524ded
[ 2943.054582] Code: 04 25 28 00 00 00 48 89 45 c8 31 c0 48 8d 45 10 c7 45 b0 10 00 00 00 48 89 45 b8 48 8d 45 d0 48 89 45 c0 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 1a 48 8b 45 c8 64 48 2b 04 25 28 00 00 00
[ 2943.054582] RSP: 002b:00007ffdbf092760 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[ 2943.054583] RAX: ffffffffffffffda RBX: 00005608cd126ae0 RCX: 00007fb9b0524ded
[ 2943.054584] RDX: 0000560900eb4110 RSI: 00000000c0186443 RDI: 0000000000000003
[ 2943.054584] RBP: 00007ffdbf0927b0 R08: 0000000000000008 R09: 00005608cd126ac0
[ 2943.054585] R10: 0000560900eb4110 R11: 0000000000000246 R12: 00000000c0186443
[ 2943.054585] R13: 0000000000000003 R14: 0000560900eb4110 R15: 0000000000000018
[ 2943.054587] </TASK>
[ 2943.054587] ---[ end trace 0000000000000000 ]---
>
> Regards,
> Christian.
>
> >
> > Thanks
> > Jesse
> >>
> >> Regards,
> >> Christian.
> >>
> >>>
> >>> Signed-off-by: Jesse.Zhang <[email protected]>
> >>> ---
> >>> drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c | 8 +++++++-
> >>> 1 file changed, 7 insertions(+), 1 deletion(-)
> >>>
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
> >>> index 87ec46c56a6e..efab39ba7f51 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
> >>> @@ -29,6 +29,7 @@
> >>> */
> >>>
> >>> #include <linux/sort.h>
> >>> +#include <linux/overflow.h>
> >>> #include <linux/uaccess.h>
> >>>
> >>> #include "amdgpu.h"
> >>> @@ -187,6 +188,11 @@ int amdgpu_bo_create_list_entry_array(struct drm_amdgpu_bo_list_in *in,
> >>> const uint32_t bo_info_size = in->bo_info_size;
> >>> const uint32_t bo_number = in->bo_number;
> >>> struct drm_amdgpu_bo_list_entry *info;
> >>> + size_t alloc_size;
> >>> +
> >>> + if (check_mul_overflow((size_t)bo_number, (size_t)info_size,
> >>> + &alloc_size) || alloc_size > INT_MAX)
> >>> + return -EINVAL;
> >>>
> >>> /* copy the handle array from userspace to a kernel buffer */
> >>> 	if (likely(info_size == bo_info_size)) {
> >>> @@ -201,7 +207,7 @@ int amdgpu_bo_create_list_entry_array(struct drm_amdgpu_bo_list_in *in,
> >>> if (!info)
> >>> return -ENOMEM;
> >>>
> >>> - memset(info, 0, bo_number * info_size);
> >>> + memset(info, 0, alloc_size);
> >>> for (i = 0; i < bo_number; ++i, uptr += bo_info_size) {
> >>> if (copy_from_user(&info[i], uptr, bytes)) {
> >>> kvfree(info);
> >