Nanopi2 #330
Open
andrewcollis wants to merge 22 commits into torvalds:master from anatol:nanopi2
Conversation
Fix commit ae05c95 (rtc: s3c: add s3c_rtc_data structure to use variant data instead of s3c_cpu_type) when device tree isn't enabled or used.
Add config `CFG80211_ALLOW_RECONNECT' and use it to quickly fix the compatibility issue between WEXT and bcm4336.
------------[ cut here ]------------
WARNING: CPU: 0 PID: 823 at net/wireless/sme.c:979 ...
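For context, the sketch below shows the usual pattern such a Kconfig option enables: compiling out the "already connected" rejection so a WEXT-driven connect can be issued again while associated. It is a hypothetical standalone illustration (stub types and names), not the actual cfg80211 change in this PR.

```c
/*
 * Illustrative sketch only (not the cfg80211 patch): a
 * CONFIG_CFG80211_ALLOW_RECONNECT-style option typically compiles out the
 * "already connected" rejection. All types and names here are stand-ins.
 */
#include <stdio.h>
#include <errno.h>
#include <stdbool.h>

struct wdev_stub {		/* stand-in for struct wireless_dev */
	bool current_bss;	/* "already associated" flag */
};

static int connect_stub(struct wdev_stub *wdev)
{
#ifndef CONFIG_CFG80211_ALLOW_RECONNECT
	if (wdev->current_bss)
		return -EALREADY;	/* reject reconnect while connected */
#endif
	/* ... proceed with (re)connection ... */
	return 0;
}

int main(void)
{
	struct wdev_stub wdev = { .current_bss = true };

	/* Returns -EALREADY unless built with -DCONFIG_CFG80211_ALLOW_RECONNECT. */
	printf("connect: %d\n", connect_stub(&wdev));
	return 0;
}
```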
t3hknr pushed a commit to t3hknr/linux that referenced this pull request on Feb 23, 2017
Geminilake can output two pixels per clock, and that affects the maximum scaling factor for its scalers. Take that into account and avoid the following warning: WARNING: CPU: 1 PID: 593 at drivers/gpu/drm/i915/intel_display.c:13223 skl_max_scale.part.129+0x78/0x80 [i915] WARN_ON_ONCE(!crtc_clock || cdclk < crtc_clock) Modules linked in: x86_pkg_temp_thermal i915 coretemp kvm_intel kvm i2c_algo_bit drm_kms_helper irqbypass crct10dif_pclmul prime_numbers crc32_pclmul drm ghash_clmulni_intel shpchp tpm_tis tpm_tis_core tpm nfsd authw CPU: 1 PID: 593 Comm: kworker/u8:3 Tainted: G W 4.10.0-rc8ander+ torvalds#330 Hardware name: Intel Corp. Geminilake/GLK RVP1 DDR4 (05), BIOS GELKRVPA.X64.0035.B33.1702150552 02/15/2017 Workqueue: events_unbound async_run_entry_fn Call Trace: dump_stack+0x86/0xc3 __warn+0xcb/0xf0 warn_slowpath_fmt+0x5f/0x80 skl_max_scale.part.129+0x78/0x80 [i915] intel_check_primary_plane+0xa6/0xc0 [i915] intel_plane_atomic_check_with_state+0xd1/0x1a0 [i915] ? drm_printk+0xb5/0xc0 [drm] intel_plane_atomic_check+0x3d/0x80 [i915] drm_atomic_helper_check_planes+0x7c/0x200 [drm_kms_helper] intel_atomic_check+0xa5b/0x11a0 [i915] drm_atomic_check_only+0x353/0x600 [drm] ? drm_atomic_add_affected_connectors+0x10c/0x120 [drm] drm_atomic_commit+0x18/0x50 [drm] restore_fbdev_mode+0x14c/0x2a0 [drm_kms_helper] drm_fb_helper_restore_fbdev_mode_unlocked+0x34/0x80 [drm_kms_helper] drm_fb_helper_set_par+0x2d/0x60 [drm_kms_helper] intel_fbdev_set_par+0x1a/0x70 [i915] fbcon_init+0x582/0x610 visual_init+0xd6/0x130 do_bind_con_driver+0x1da/0x3c0 do_take_over_console+0x116/0x180 do_fbcon_takeover+0x5c/0xb0 fbcon_event_notify+0x772/0x8a0 ? __blocking_notifier_call_chain+0x35/0x70 notifier_call_chain+0x4a/0x70 __blocking_notifier_call_chain+0x4d/0x70 blocking_notifier_call_chain+0x16/0x20 fb_notifier_call_chain+0x1b/0x20 register_framebuffer+0x278/0x360 drm_fb_helper_initial_config+0x253/0x440 [drm_kms_helper] intel_fbdev_initial_config+0x18/0x30 [i915] async_run_entry_fn+0x39/0x170 process_one_work+0x212/0x670 ? process_one_work+0x197/0x670 worker_thread+0x4e/0x490 kthread+0x101/0x140 ? process_one_work+0x670/0x670 ? kthread_create_on_node+0x60/0x60 ret_from_fork+0x31/0x40 v2: s/max_pixclk/max_dotclk/ (Ville) Cc: Rodrigo Vivi <[email protected]> Signed-off-by: Ander Conselvan de Oliveira <[email protected]> Reviewed-by: Ville Syrjälä <[email protected]> Link: http://patchwork.freedesktop.org/patch/msgid/[email protected]
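For context, the idea behind the fix is that the maximum plane downscale factor is bounded by how fast the pipe can consume pixels (cdclk relative to the CRTC clock), and a platform that outputs two pixels per clock effectively doubles that limit. The standalone sketch below illustrates the arithmetic with hypothetical names; it is not the i915 implementation.

```c
/*
 * Illustrative sketch (not drivers/gpu/drm/i915 code): the maximum downscale
 * is limited by cdclk vs. the CRTC clock; with two pixels per clock the
 * effective dotclock limit doubles, which is what the warning above missed.
 */
#include <stdio.h>
#include <stdbool.h>

/* Max downscale in 16.16 fixed point; capped at 3x here for illustration. */
static unsigned int max_scale(unsigned int crtc_clock_khz,
			      unsigned int cdclk_khz,
			      bool two_pixels_per_clock)
{
	unsigned long long max_dotclk = cdclk_khz;
	unsigned long long scale;

	if (two_pixels_per_clock)
		max_dotclk *= 2;

	if (!crtc_clock_khz || max_dotclk < crtc_clock_khz)
		return 0;	/* the case the WARN above complained about */

	scale = (max_dotclk << 16) / crtc_clock_khz;
	if (scale > (3ULL << 16))
		scale = 3ULL << 16;
	return (unsigned int)scale;
}

int main(void)
{
	/* e.g. a ~594 MHz 4K pixel clock against a 316.8 MHz cdclk */
	printf("1 pixel/clock:  0x%x\n", max_scale(594000, 316800, false));
	printf("2 pixels/clock: 0x%x\n", max_scale(594000, 316800, true));
	return 0;
}
```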
l1k pushed a commit to RevolutionPi/linux that referenced this pull request on Apr 4, 2017
With completion using swait and so rawlocks we don't need this anymore. Further, bisect thinks this patch is responsible for: |BUG: unable to handle kernel NULL pointer dereference at (null) |IP: [<ffffffff81082123>] sched_cpu_active+0x53/0x70 |PGD 0 |Oops: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC |Dumping ftrace buffer: | (ftrace buffer empty) |Modules linked in: |CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.4.1+ torvalds#330 |Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS Debian-1.8.2-1 04/01/2014 |task: ffff88013ae64b00 ti: ffff88013ae74000 task.ti: ffff88013ae74000 |RIP: 0010:[<ffffffff81082123>] [<ffffffff81082123>] sched_cpu_active+0x53/0x70 |RSP: 0000:ffff88013ae77eb8 EFLAGS: 00010082 |RAX: 0000000000000001 RBX: ffffffff81c2cf20 RCX: 0000001050fb52fb |RDX: 0000001050fb52fb RSI: 000000105117ca1e RDI: 00000000001c7723 |RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000001 |R10: 0000000000000000 R11: 0000000000000001 R12: 00000000ffffffff |R13: ffffffff81c2cee0 R14: 0000000000000000 R15: 0000000000000001 |FS: 0000000000000000(0000) GS:ffff88013b200000(0000) knlGS:0000000000000000 |CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b |CR2: 0000000000000000 CR3: 0000000001c09000 CR4: 00000000000006e0 |Stack: | ffffffff810c446d ffff88013ae77f00 ffffffff8107d8dd 000000000000000a | 0000000000000001 0000000000000000 0000000000000000 0000000000000000 | 0000000000000000 ffff88013ae77f10 ffffffff8107d90e ffff88013ae77f20 |Call Trace: | [<ffffffff810c446d>] ? debug_lockdep_rcu_enabled+0x1d/0x20 | [<ffffffff8107d8dd>] ? notifier_call_chain+0x5d/0x80 | [<ffffffff8107d90e>] ? __raw_notifier_call_chain+0xe/0x10 | [<ffffffff810598a3>] ? cpu_notify+0x23/0x40 | [<ffffffff8105a7b8>] ? notify_cpu_starting+0x28/0x30 during hotplug. The rawlocks need to remain however. Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
fengguang pushed a commit to 0day-ci/linux that referenced this pull request on Jul 16, 2018
WARNING: please, no spaces at the start of a line torvalds#250: FILE: kernel/cgroup/cgroup.c:4554: + {$ ERROR: code indent should use tabs where possible torvalds#251: FILE: kernel/cgroup/cgroup.c:4555: + .name = "cpu.pressure",$ WARNING: please, no spaces at the start of a line torvalds#251: FILE: kernel/cgroup/cgroup.c:4555: + .name = "cpu.pressure",$ ERROR: code indent should use tabs where possible torvalds#252: FILE: kernel/cgroup/cgroup.c:4556: + .flags = CFTYPE_NOT_ON_ROOT,$ WARNING: please, no spaces at the start of a line torvalds#252: FILE: kernel/cgroup/cgroup.c:4556: + .flags = CFTYPE_NOT_ON_ROOT,$ ERROR: code indent should use tabs where possible torvalds#253: FILE: kernel/cgroup/cgroup.c:4557: + .seq_show = cgroup_cpu_pressure_show,$ WARNING: please, no spaces at the start of a line torvalds#253: FILE: kernel/cgroup/cgroup.c:4557: + .seq_show = cgroup_cpu_pressure_show,$ WARNING: please, no spaces at the start of a line torvalds#254: FILE: kernel/cgroup/cgroup.c:4558: + },$ WARNING: please, no spaces at the start of a line torvalds#255: FILE: kernel/cgroup/cgroup.c:4559: + {$ ERROR: code indent should use tabs where possible torvalds#256: FILE: kernel/cgroup/cgroup.c:4560: + .name = "memory.pressure",$ WARNING: please, no spaces at the start of a line torvalds#256: FILE: kernel/cgroup/cgroup.c:4560: + .name = "memory.pressure",$ ERROR: code indent should use tabs where possible torvalds#257: FILE: kernel/cgroup/cgroup.c:4561: + .flags = CFTYPE_NOT_ON_ROOT,$ WARNING: please, no spaces at the start of a line torvalds#257: FILE: kernel/cgroup/cgroup.c:4561: + .flags = CFTYPE_NOT_ON_ROOT,$ ERROR: code indent should use tabs where possible torvalds#258: FILE: kernel/cgroup/cgroup.c:4562: + .seq_show = cgroup_memory_pressure_show,$ WARNING: please, no spaces at the start of a line torvalds#258: FILE: kernel/cgroup/cgroup.c:4562: + .seq_show = cgroup_memory_pressure_show,$ WARNING: please, no spaces at the start of a line torvalds#259: FILE: kernel/cgroup/cgroup.c:4563: + },$ WARNING: please, no spaces at the start of a line torvalds#260: FILE: kernel/cgroup/cgroup.c:4564: + {$ ERROR: code indent should use tabs where possible torvalds#261: FILE: kernel/cgroup/cgroup.c:4565: + .name = "io.pressure",$ WARNING: please, no spaces at the start of a line torvalds#261: FILE: kernel/cgroup/cgroup.c:4565: + .name = "io.pressure",$ ERROR: code indent should use tabs where possible torvalds#262: FILE: kernel/cgroup/cgroup.c:4566: + .flags = CFTYPE_NOT_ON_ROOT,$ WARNING: please, no spaces at the start of a line torvalds#262: FILE: kernel/cgroup/cgroup.c:4566: + .flags = CFTYPE_NOT_ON_ROOT,$ ERROR: code indent should use tabs where possible torvalds#263: FILE: kernel/cgroup/cgroup.c:4567: + .seq_show = cgroup_io_pressure_show,$ WARNING: please, no spaces at the start of a line torvalds#263: FILE: kernel/cgroup/cgroup.c:4567: + .seq_show = cgroup_io_pressure_show,$ WARNING: please, no spaces at the start of a line torvalds#264: FILE: kernel/cgroup/cgroup.c:4568: + },$ WARNING: please, no spaces at the start of a line torvalds#322: FILE: kernel/sched/psi.c:424: + cgroup = task->cgroups->dfl_cgrp;$ WARNING: please, no spaces at the start of a line torvalds#323: FILE: kernel/sched/psi.c:425: + while (cgroup && (parent = cgroup_parent(cgroup))) {$ WARNING: suspect code indent for conditional statements (7, 15) torvalds#323: FILE: kernel/sched/psi.c:425: + while (cgroup && (parent = cgroup_parent(cgroup))) { + struct psi_group *group; ERROR: code indent should use tabs where possible torvalds#324: 
FILE: kernel/sched/psi.c:426: + struct psi_group *group;$ WARNING: please, no spaces at the start of a line torvalds#324: FILE: kernel/sched/psi.c:426: + struct psi_group *group;$ ERROR: code indent should use tabs where possible torvalds#326: FILE: kernel/sched/psi.c:428: + group = cgroup_psi(cgroup);$ WARNING: please, no spaces at the start of a line torvalds#326: FILE: kernel/sched/psi.c:428: + group = cgroup_psi(cgroup);$ ERROR: code indent should use tabs where possible torvalds#327: FILE: kernel/sched/psi.c:429: + psi_group_change(group, cpu, now, clear, set);$ WARNING: please, no spaces at the start of a line torvalds#327: FILE: kernel/sched/psi.c:429: + psi_group_change(group, cpu, now, clear, set);$ ERROR: code indent should use tabs where possible torvalds#329: FILE: kernel/sched/psi.c:431: + cgroup = parent;$ WARNING: please, no spaces at the start of a line torvalds#329: FILE: kernel/sched/psi.c:431: + cgroup = parent;$ WARNING: please, no spaces at the start of a line torvalds#330: FILE: kernel/sched/psi.c:432: + }$ WARNING: braces {} are not necessary for any arm of this statement torvalds#378: FILE: kernel/sched/psi.c:537: + if (task_on_rq_queued(task)) { [...] + } else if (task->in_iowait) { [...] total: 13 errors, 24 warnings, 334 lines checked NOTE: For some of the reported defects, checkpatch may be able to mechanically convert to the typical style using --fix or --fix-inplace. NOTE: Whitespace errors detected. You may wish to use scripts/cleanpatch or scripts/cleanfile ./patches/psi-cgroup-support.patch has style problems, please review. NOTE: If any of the errors are false positives, please report them to the maintainer, see CHECKPATCH in MAINTAINERS. Please run checkpatch prior to sending patches Cc: Johannes Weiner <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Stephen Rothwell <[email protected]>
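For context, the complaints above come from space-indented initializer entries; kernel style indents with tabs. The standalone sketch below uses a stand-in struct (not the real struct cftype) to show the layout checkpatch expects.

```c
/*
 * Illustration only: the "code indent should use tabs" / "no spaces at the
 * start of a line" messages above are triggered by space-indented entries
 * such as the cpu.pressure/memory.pressure/io.pressure cftypes. The struct
 * below is a stand-in, not the real struct cftype.
 */
#include <stdio.h>

struct file_entry {			/* stand-in for struct cftype */
	const char *name;
	unsigned int flags;
	int (*seq_show)(void);
};

#define NOT_ON_ROOT 0x1

static int cpu_pressure_show(void) { return 0; }

/* Tab-indented as kernel style requires; spaces here would trip checkpatch. */
static struct file_entry entries[] = {
	{
		.name = "cpu.pressure",
		.flags = NOT_ON_ROOT,
		.seq_show = cpu_pressure_show,
	},
	{ 0 }	/* terminator */
};

int main(void)
{
	printf("%s\n", entries[0].name);
	return 0;
}
```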
fengguang pushed a commit to 0day-ci/linux that referenced this pull request on Mar 15, 2021
This commit fixes the following checkpatch.pl errors: ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#285: FILE: ./hal/odm.c:285: +void odm_CommonInfoSelfInit(struct DM_ODM_T * pDM_Odm); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#287: FILE: ./hal/odm.c:287: +void odm_CommonInfoSelfUpdate(struct DM_ODM_T * pDM_Odm); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#289: FILE: ./hal/odm.c:289: +void odm_CmnInfoInit_Debug(struct DM_ODM_T * pDM_Odm); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#291: FILE: ./hal/odm.c:291: +void odm_BasicDbgMessage(struct DM_ODM_T * pDM_Odm); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#305: FILE: ./hal/odm.c:305: +void odm_RefreshRateAdaptiveMaskCE(struct DM_ODM_T * pDM_Odm); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#309: FILE: ./hal/odm.c:309: +void odm_RSSIMonitorInit(struct DM_ODM_T * pDM_Odm); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#311: FILE: ./hal/odm.c:311: +void odm_RSSIMonitorCheckCE(struct DM_ODM_T * pDM_Odm); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#313: FILE: ./hal/odm.c:313: +void odm_RSSIMonitorCheck(struct DM_ODM_T * pDM_Odm); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#315: FILE: ./hal/odm.c:315: +void odm_SwAntDetectInit(struct DM_ODM_T * pDM_Odm); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#323: FILE: ./hal/odm.c:323: +void odm_RefreshRateAdaptiveMask(struct DM_ODM_T * pDM_Odm); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#325: FILE: ./hal/odm.c:325: +void ODM_TXPowerTrackingCheck(struct DM_ODM_T * pDM_Odm); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#327: FILE: ./hal/odm.c:327: +void odm_RateAdaptiveMaskInit(struct DM_ODM_T * pDM_Odm); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#330: FILE: ./hal/odm.c:330: +void odm_TXPowerTrackingInit(struct DM_ODM_T * pDM_Odm); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#338: FILE: ./hal/odm.c:338: +void odm_InitHybridAntDiv(struct DM_ODM_T * pDM_Odm); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#341: FILE: ./hal/odm.c:341: + struct DM_ODM_T * pDM_Odm, ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#349: FILE: ./hal/odm.c:349: +void odm_SetRxIdleAnt(struct DM_ODM_T * pDM_Odm, u8 Ant, bool bDualPath); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#353: FILE: ./hal/odm.c:353: +void odm_HwAntDiv(struct DM_ODM_T * pDM_Odm); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#363: FILE: ./hal/odm.c:363: +void ODM_DMInit(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#393: FILE: ./hal/odm.c:393: +void ODM_DMWatchdog(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#420: FILE: ./hal/odm.c:420: + struct DIG_T * pDM_DigTable = &pDM_Odm->DM_DigTable; ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#448: FILE: ./hal/odm.c:448: +void ODM_CmnInfoInit(struct DM_ODM_T * pDM_Odm, enum ODM_CMNINFO_E CmnInfo, u32 Value) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#560: FILE: ./hal/odm.c:560: +void ODM_CmnInfoHook(struct DM_ODM_T * pDM_Odm, enum ODM_CMNINFO_E CmnInfo, void *pValue) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#689: FILE: ./hal/odm.c:689: + struct DM_ODM_T * pDM_Odm, ERROR:POINTER_LOCATION: "foo 
* bar" should be "foo *bar" torvalds#717: FILE: ./hal/odm.c:717: +void ODM_CmnInfoUpdate(struct DM_ODM_T * pDM_Odm, u32 CmnInfo, u64 Value) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#831: FILE: ./hal/odm.c:831: +void odm_CommonInfoSelfInit(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#841: FILE: ./hal/odm.c:841: +void odm_CommonInfoSelfUpdate(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#867: FILE: ./hal/odm.c:867: +void odm_CmnInfoInit_Debug(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#888: FILE: ./hal/odm.c:888: +void odm_BasicDbgMessage(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#935: FILE: ./hal/odm.c:935: +void odm_RateAdaptiveMaskInit(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#937: FILE: ./hal/odm.c:937: + struct ODM_RATE_ADAPTIVE * pOdmRA = &pDM_Odm->RateAdaptive; ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#953: FILE: ./hal/odm.c:953: + struct DM_ODM_T * pDM_Odm, ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#1083: FILE: ./hal/odm.c:1083: +void odm_RefreshRateAdaptiveMask(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#1094: FILE: ./hal/odm.c:1094: +void odm_RefreshRateAdaptiveMaskCE(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" #1131: FILE: ./hal/odm.c:1131: + struct DM_ODM_T * pDM_Odm, ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" #1137: FILE: ./hal/odm.c:1137: + struct ODM_RATE_ADAPTIVE * pRA = &pDM_Odm->RateAdaptive; ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" #1196: FILE: ./hal/odm.c:1196: +void odm_RSSIMonitorInit(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" #1198: FILE: ./hal/odm.c:1198: + struct RA_T * pRA_Table = &pDM_Odm->DM_RA_Table; ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" #1204: FILE: ./hal/odm.c:1204: +void odm_RSSIMonitorCheck(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" #1217: FILE: ./hal/odm.c:1217: + struct DM_ODM_T * pDM_Odm = &(pHalData->odmpriv); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" #1234: FILE: ./hal/odm.c:1234: +void odm_RSSIMonitorCheckCE(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" #1243: FILE: ./hal/odm.c:1243: + struct RA_T * pRA_Table = &pDM_Odm->DM_RA_Table; ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" #1306: FILE: ./hal/odm.c:1306: +static u8 getSwingIndex(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" #1330: FILE: ./hal/odm.c:1330: +void odm_TXPowerTrackingInit(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" #1374: FILE: ./hal/odm.c:1374: +void ODM_TXPowerTrackingCheck(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" #1398: FILE: ./hal/odm.c:1398: +void odm_SwAntDetectInit(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" #1400: FILE: ./hal/odm.c:1400: + struct SWAT_T * pDM_SWAT_Table = &pDM_Odm->DM_SWAT_Table; Signed-off-by: Marco Cesati <[email protected]>
fengguang pushed a commit to 0day-ci/linux that referenced this pull request on Mar 16, 2021
This commit fixes the following checkpatch.pl errors: ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#285: FILE: ./hal/odm.c:285: +void odm_CommonInfoSelfInit(struct DM_ODM_T * pDM_Odm); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#287: FILE: ./hal/odm.c:287: +void odm_CommonInfoSelfUpdate(struct DM_ODM_T * pDM_Odm); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#289: FILE: ./hal/odm.c:289: +void odm_CmnInfoInit_Debug(struct DM_ODM_T * pDM_Odm); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#291: FILE: ./hal/odm.c:291: +void odm_BasicDbgMessage(struct DM_ODM_T * pDM_Odm); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#305: FILE: ./hal/odm.c:305: +void odm_RefreshRateAdaptiveMaskCE(struct DM_ODM_T * pDM_Odm); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#309: FILE: ./hal/odm.c:309: +void odm_RSSIMonitorInit(struct DM_ODM_T * pDM_Odm); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#311: FILE: ./hal/odm.c:311: +void odm_RSSIMonitorCheckCE(struct DM_ODM_T * pDM_Odm); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#313: FILE: ./hal/odm.c:313: +void odm_RSSIMonitorCheck(struct DM_ODM_T * pDM_Odm); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#315: FILE: ./hal/odm.c:315: +void odm_SwAntDetectInit(struct DM_ODM_T * pDM_Odm); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#323: FILE: ./hal/odm.c:323: +void odm_RefreshRateAdaptiveMask(struct DM_ODM_T * pDM_Odm); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#325: FILE: ./hal/odm.c:325: +void ODM_TXPowerTrackingCheck(struct DM_ODM_T * pDM_Odm); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#327: FILE: ./hal/odm.c:327: +void odm_RateAdaptiveMaskInit(struct DM_ODM_T * pDM_Odm); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#330: FILE: ./hal/odm.c:330: +void odm_TXPowerTrackingInit(struct DM_ODM_T * pDM_Odm); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#338: FILE: ./hal/odm.c:338: +void odm_InitHybridAntDiv(struct DM_ODM_T * pDM_Odm); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#341: FILE: ./hal/odm.c:341: + struct DM_ODM_T * pDM_Odm, ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#349: FILE: ./hal/odm.c:349: +void odm_SetRxIdleAnt(struct DM_ODM_T * pDM_Odm, u8 Ant, bool bDualPath); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#353: FILE: ./hal/odm.c:353: +void odm_HwAntDiv(struct DM_ODM_T * pDM_Odm); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#363: FILE: ./hal/odm.c:363: +void ODM_DMInit(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#393: FILE: ./hal/odm.c:393: +void ODM_DMWatchdog(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#420: FILE: ./hal/odm.c:420: + struct DIG_T * pDM_DigTable = &pDM_Odm->DM_DigTable; ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#448: FILE: ./hal/odm.c:448: +void ODM_CmnInfoInit(struct DM_ODM_T * pDM_Odm, enum ODM_CMNINFO_E CmnInfo, u32 Value) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#560: FILE: ./hal/odm.c:560: +void ODM_CmnInfoHook(struct DM_ODM_T * pDM_Odm, enum ODM_CMNINFO_E CmnInfo, void *pValue) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#689: FILE: ./hal/odm.c:689: + struct DM_ODM_T * pDM_Odm, ERROR:POINTER_LOCATION: "foo 
* bar" should be "foo *bar" torvalds#717: FILE: ./hal/odm.c:717: +void ODM_CmnInfoUpdate(struct DM_ODM_T * pDM_Odm, u32 CmnInfo, u64 Value) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#831: FILE: ./hal/odm.c:831: +void odm_CommonInfoSelfInit(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#841: FILE: ./hal/odm.c:841: +void odm_CommonInfoSelfUpdate(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#867: FILE: ./hal/odm.c:867: +void odm_CmnInfoInit_Debug(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#888: FILE: ./hal/odm.c:888: +void odm_BasicDbgMessage(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#935: FILE: ./hal/odm.c:935: +void odm_RateAdaptiveMaskInit(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#937: FILE: ./hal/odm.c:937: + struct ODM_RATE_ADAPTIVE * pOdmRA = &pDM_Odm->RateAdaptive; ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#953: FILE: ./hal/odm.c:953: + struct DM_ODM_T * pDM_Odm, ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#1083: FILE: ./hal/odm.c:1083: +void odm_RefreshRateAdaptiveMask(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" torvalds#1094: FILE: ./hal/odm.c:1094: +void odm_RefreshRateAdaptiveMaskCE(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" #1131: FILE: ./hal/odm.c:1131: + struct DM_ODM_T * pDM_Odm, ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" #1137: FILE: ./hal/odm.c:1137: + struct ODM_RATE_ADAPTIVE * pRA = &pDM_Odm->RateAdaptive; ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" #1196: FILE: ./hal/odm.c:1196: +void odm_RSSIMonitorInit(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" #1198: FILE: ./hal/odm.c:1198: + struct RA_T * pRA_Table = &pDM_Odm->DM_RA_Table; ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" #1204: FILE: ./hal/odm.c:1204: +void odm_RSSIMonitorCheck(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" #1217: FILE: ./hal/odm.c:1217: + struct DM_ODM_T * pDM_Odm = &(pHalData->odmpriv); ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" #1234: FILE: ./hal/odm.c:1234: +void odm_RSSIMonitorCheckCE(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" #1243: FILE: ./hal/odm.c:1243: + struct RA_T * pRA_Table = &pDM_Odm->DM_RA_Table; ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" #1306: FILE: ./hal/odm.c:1306: +static u8 getSwingIndex(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" #1330: FILE: ./hal/odm.c:1330: +void odm_TXPowerTrackingInit(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" #1374: FILE: ./hal/odm.c:1374: +void ODM_TXPowerTrackingCheck(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" #1398: FILE: ./hal/odm.c:1398: +void odm_SwAntDetectInit(struct DM_ODM_T * pDM_Odm) ERROR:POINTER_LOCATION: "foo * bar" should be "foo *bar" #1400: FILE: ./hal/odm.c:1400: + struct SWAT_T * pDM_SWAT_Table = &pDM_Odm->DM_SWAT_Table; Reviewed-by: Dan Carpenter <[email protected]> Signed-off-by: Marco Cesati <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
ojeda added a commit to ojeda/linux that referenced this pull request on Jun 3, 2021
rust: helpers: Clarify comment on size_t = uintptr_t guard
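For context, a "size_t = uintptr_t" guard is typically a build-time assertion that the two types have the same width, so values can cross the C/Rust helper boundary without truncation. The standalone C sketch below only illustrates that kind of guard; it is not the code or comment from the commit above.

```c
/*
 * Illustration only: a build-time check that size_t and uintptr_t have the
 * same width, which is the property a "size_t = uintptr_t" guard asserts.
 */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static_assert(sizeof(size_t) == sizeof(uintptr_t),
	      "size_t and uintptr_t must have the same width");

int main(void)
{
	printf("sizeof(size_t)=%zu sizeof(uintptr_t)=%zu\n",
	       sizeof(size_t), sizeof(uintptr_t));
	return 0;
}
```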
intel-lab-lkp pushed a commit to intel-lab-lkp/linux that referenced this pull request on Oct 27, 2023
Enable cpu v4 tests for LoongArch. Currently, we don't have BPF trampoline in LoongArch JIT, so the fentry test `test_ptr_struct_arg` still failed, will followup. Test result attached below: # ./test_progs -t verifier_sdiv,verifier_movsx,verifier_ldsx,verifier_gotol,verifier_bswap torvalds#316/1 verifier_bswap/BSWAP, 16:OK torvalds#316/2 verifier_bswap/BSWAP, 16 @unpriv:OK torvalds#316/3 verifier_bswap/BSWAP, 32:OK torvalds#316/4 verifier_bswap/BSWAP, 32 @unpriv:OK torvalds#316/5 verifier_bswap/BSWAP, 64:OK torvalds#316/6 verifier_bswap/BSWAP, 64 @unpriv:OK torvalds#316 verifier_bswap:OK torvalds#330/1 verifier_gotol/gotol, small_imm:OK torvalds#330/2 verifier_gotol/gotol, small_imm @unpriv:OK torvalds#330 verifier_gotol:OK torvalds#338/1 verifier_ldsx/LDSX, S8:OK torvalds#338/2 verifier_ldsx/LDSX, S8 @unpriv:OK torvalds#338/3 verifier_ldsx/LDSX, S16:OK torvalds#338/4 verifier_ldsx/LDSX, S16 @unpriv:OK torvalds#338/5 verifier_ldsx/LDSX, S32:OK torvalds#338/6 verifier_ldsx/LDSX, S32 @unpriv:OK torvalds#338/7 verifier_ldsx/LDSX, S8 range checking, privileged:OK torvalds#338/8 verifier_ldsx/LDSX, S16 range checking:OK torvalds#338/9 verifier_ldsx/LDSX, S16 range checking @unpriv:OK torvalds#338/10 verifier_ldsx/LDSX, S32 range checking:OK torvalds#338/11 verifier_ldsx/LDSX, S32 range checking @unpriv:OK torvalds#338 verifier_ldsx:OK torvalds#349/1 verifier_movsx/MOV32SX, S8:OK torvalds#349/2 verifier_movsx/MOV32SX, S8 @unpriv:OK torvalds#349/3 verifier_movsx/MOV32SX, S16:OK torvalds#349/4 verifier_movsx/MOV32SX, S16 @unpriv:OK torvalds#349/5 verifier_movsx/MOV64SX, S8:OK torvalds#349/6 verifier_movsx/MOV64SX, S8 @unpriv:OK torvalds#349/7 verifier_movsx/MOV64SX, S16:OK torvalds#349/8 verifier_movsx/MOV64SX, S16 @unpriv:OK torvalds#349/9 verifier_movsx/MOV64SX, S32:OK torvalds#349/10 verifier_movsx/MOV64SX, S32 @unpriv:OK torvalds#349/11 verifier_movsx/MOV32SX, S8, range_check:OK torvalds#349/12 verifier_movsx/MOV32SX, S8, range_check @unpriv:OK torvalds#349/13 verifier_movsx/MOV32SX, S16, range_check:OK torvalds#349/14 verifier_movsx/MOV32SX, S16, range_check @unpriv:OK torvalds#349/15 verifier_movsx/MOV32SX, S16, range_check 2:OK torvalds#349/16 verifier_movsx/MOV32SX, S16, range_check 2 @unpriv:OK torvalds#349/17 verifier_movsx/MOV64SX, S8, range_check:OK torvalds#349/18 verifier_movsx/MOV64SX, S8, range_check @unpriv:OK torvalds#349/19 verifier_movsx/MOV64SX, S16, range_check:OK torvalds#349/20 verifier_movsx/MOV64SX, S16, range_check @unpriv:OK torvalds#349/21 verifier_movsx/MOV64SX, S32, range_check:OK torvalds#349/22 verifier_movsx/MOV64SX, S32, range_check @unpriv:OK torvalds#349/23 verifier_movsx/MOV64SX, S16, R10 Sign Extension:OK torvalds#349/24 verifier_movsx/MOV64SX, S16, R10 Sign Extension @unpriv:OK torvalds#349 verifier_movsx:OK torvalds#361/1 verifier_sdiv/SDIV32, non-zero imm divisor, check 1:OK torvalds#361/2 verifier_sdiv/SDIV32, non-zero imm divisor, check 1 @unpriv:OK torvalds#361/3 verifier_sdiv/SDIV32, non-zero imm divisor, check 2:OK torvalds#361/4 verifier_sdiv/SDIV32, non-zero imm divisor, check 2 @unpriv:OK torvalds#361/5 verifier_sdiv/SDIV32, non-zero imm divisor, check 3:OK torvalds#361/6 verifier_sdiv/SDIV32, non-zero imm divisor, check 3 @unpriv:OK torvalds#361/7 verifier_sdiv/SDIV32, non-zero imm divisor, check 4:OK torvalds#361/8 verifier_sdiv/SDIV32, non-zero imm divisor, check 4 @unpriv:OK torvalds#361/9 verifier_sdiv/SDIV32, non-zero imm divisor, check 5:OK torvalds#361/10 verifier_sdiv/SDIV32, non-zero imm divisor, check 5 @unpriv:OK torvalds#361/11 
verifier_sdiv/SDIV32, non-zero imm divisor, check 6:OK torvalds#361/12 verifier_sdiv/SDIV32, non-zero imm divisor, check 6 @unpriv:OK torvalds#361/13 verifier_sdiv/SDIV32, non-zero imm divisor, check 7:OK torvalds#361/14 verifier_sdiv/SDIV32, non-zero imm divisor, check 7 @unpriv:OK torvalds#361/15 verifier_sdiv/SDIV32, non-zero imm divisor, check 8:OK torvalds#361/16 verifier_sdiv/SDIV32, non-zero imm divisor, check 8 @unpriv:OK torvalds#361/17 verifier_sdiv/SDIV32, non-zero reg divisor, check 1:OK torvalds#361/18 verifier_sdiv/SDIV32, non-zero reg divisor, check 1 @unpriv:OK torvalds#361/19 verifier_sdiv/SDIV32, non-zero reg divisor, check 2:OK torvalds#361/20 verifier_sdiv/SDIV32, non-zero reg divisor, check 2 @unpriv:OK torvalds#361/21 verifier_sdiv/SDIV32, non-zero reg divisor, check 3:OK torvalds#361/22 verifier_sdiv/SDIV32, non-zero reg divisor, check 3 @unpriv:OK torvalds#361/23 verifier_sdiv/SDIV32, non-zero reg divisor, check 4:OK torvalds#361/24 verifier_sdiv/SDIV32, non-zero reg divisor, check 4 @unpriv:OK torvalds#361/25 verifier_sdiv/SDIV32, non-zero reg divisor, check 5:OK torvalds#361/26 verifier_sdiv/SDIV32, non-zero reg divisor, check 5 @unpriv:OK torvalds#361/27 verifier_sdiv/SDIV32, non-zero reg divisor, check 6:OK torvalds#361/28 verifier_sdiv/SDIV32, non-zero reg divisor, check 6 @unpriv:OK torvalds#361/29 verifier_sdiv/SDIV32, non-zero reg divisor, check 7:OK torvalds#361/30 verifier_sdiv/SDIV32, non-zero reg divisor, check 7 @unpriv:OK torvalds#361/31 verifier_sdiv/SDIV32, non-zero reg divisor, check 8:OK torvalds#361/32 verifier_sdiv/SDIV32, non-zero reg divisor, check 8 @unpriv:OK torvalds#361/33 verifier_sdiv/SDIV64, non-zero imm divisor, check 1:OK torvalds#361/34 verifier_sdiv/SDIV64, non-zero imm divisor, check 1 @unpriv:OK torvalds#361/35 verifier_sdiv/SDIV64, non-zero imm divisor, check 2:OK torvalds#361/36 verifier_sdiv/SDIV64, non-zero imm divisor, check 2 @unpriv:OK torvalds#361/37 verifier_sdiv/SDIV64, non-zero imm divisor, check 3:OK torvalds#361/38 verifier_sdiv/SDIV64, non-zero imm divisor, check 3 @unpriv:OK torvalds#361/39 verifier_sdiv/SDIV64, non-zero imm divisor, check 4:OK torvalds#361/40 verifier_sdiv/SDIV64, non-zero imm divisor, check 4 @unpriv:OK torvalds#361/41 verifier_sdiv/SDIV64, non-zero imm divisor, check 5:OK torvalds#361/42 verifier_sdiv/SDIV64, non-zero imm divisor, check 5 @unpriv:OK torvalds#361/43 verifier_sdiv/SDIV64, non-zero imm divisor, check 6:OK torvalds#361/44 verifier_sdiv/SDIV64, non-zero imm divisor, check 6 @unpriv:OK torvalds#361/45 verifier_sdiv/SDIV64, non-zero reg divisor, check 1:OK torvalds#361/46 verifier_sdiv/SDIV64, non-zero reg divisor, check 1 @unpriv:OK torvalds#361/47 verifier_sdiv/SDIV64, non-zero reg divisor, check 2:OK torvalds#361/48 verifier_sdiv/SDIV64, non-zero reg divisor, check 2 @unpriv:OK torvalds#361/49 verifier_sdiv/SDIV64, non-zero reg divisor, check 3:OK torvalds#361/50 verifier_sdiv/SDIV64, non-zero reg divisor, check 3 @unpriv:OK torvalds#361/51 verifier_sdiv/SDIV64, non-zero reg divisor, check 4:OK torvalds#361/52 verifier_sdiv/SDIV64, non-zero reg divisor, check 4 @unpriv:OK torvalds#361/53 verifier_sdiv/SDIV64, non-zero reg divisor, check 5:OK torvalds#361/54 verifier_sdiv/SDIV64, non-zero reg divisor, check 5 @unpriv:OK torvalds#361/55 verifier_sdiv/SDIV64, non-zero reg divisor, check 6:OK torvalds#361/56 verifier_sdiv/SDIV64, non-zero reg divisor, check 6 @unpriv:OK torvalds#361/57 verifier_sdiv/SMOD32, non-zero imm divisor, check 1:OK torvalds#361/58 verifier_sdiv/SMOD32, non-zero 
imm divisor, check 1 @unpriv:OK torvalds#361/59 verifier_sdiv/SMOD32, non-zero imm divisor, check 2:OK torvalds#361/60 verifier_sdiv/SMOD32, non-zero imm divisor, check 2 @unpriv:OK torvalds#361/61 verifier_sdiv/SMOD32, non-zero imm divisor, check 3:OK torvalds#361/62 verifier_sdiv/SMOD32, non-zero imm divisor, check 3 @unpriv:OK torvalds#361/63 verifier_sdiv/SMOD32, non-zero imm divisor, check 4:OK torvalds#361/64 verifier_sdiv/SMOD32, non-zero imm divisor, check 4 @unpriv:OK torvalds#361/65 verifier_sdiv/SMOD32, non-zero imm divisor, check 5:OK torvalds#361/66 verifier_sdiv/SMOD32, non-zero imm divisor, check 5 @unpriv:OK torvalds#361/67 verifier_sdiv/SMOD32, non-zero imm divisor, check 6:OK torvalds#361/68 verifier_sdiv/SMOD32, non-zero imm divisor, check 6 @unpriv:OK torvalds#361/69 verifier_sdiv/SMOD32, non-zero reg divisor, check 1:OK torvalds#361/70 verifier_sdiv/SMOD32, non-zero reg divisor, check 1 @unpriv:OK torvalds#361/71 verifier_sdiv/SMOD32, non-zero reg divisor, check 2:OK torvalds#361/72 verifier_sdiv/SMOD32, non-zero reg divisor, check 2 @unpriv:OK torvalds#361/73 verifier_sdiv/SMOD32, non-zero reg divisor, check 3:OK torvalds#361/74 verifier_sdiv/SMOD32, non-zero reg divisor, check 3 @unpriv:OK torvalds#361/75 verifier_sdiv/SMOD32, non-zero reg divisor, check 4:OK torvalds#361/76 verifier_sdiv/SMOD32, non-zero reg divisor, check 4 @unpriv:OK torvalds#361/77 verifier_sdiv/SMOD32, non-zero reg divisor, check 5:OK torvalds#361/78 verifier_sdiv/SMOD32, non-zero reg divisor, check 5 @unpriv:OK torvalds#361/79 verifier_sdiv/SMOD32, non-zero reg divisor, check 6:OK torvalds#361/80 verifier_sdiv/SMOD32, non-zero reg divisor, check 6 @unpriv:OK torvalds#361/81 verifier_sdiv/SMOD64, non-zero imm divisor, check 1:OK torvalds#361/82 verifier_sdiv/SMOD64, non-zero imm divisor, check 1 @unpriv:OK torvalds#361/83 verifier_sdiv/SMOD64, non-zero imm divisor, check 2:OK torvalds#361/84 verifier_sdiv/SMOD64, non-zero imm divisor, check 2 @unpriv:OK torvalds#361/85 verifier_sdiv/SMOD64, non-zero imm divisor, check 3:OK torvalds#361/86 verifier_sdiv/SMOD64, non-zero imm divisor, check 3 @unpriv:OK torvalds#361/87 verifier_sdiv/SMOD64, non-zero imm divisor, check 4:OK torvalds#361/88 verifier_sdiv/SMOD64, non-zero imm divisor, check 4 @unpriv:OK torvalds#361/89 verifier_sdiv/SMOD64, non-zero imm divisor, check 5:OK torvalds#361/90 verifier_sdiv/SMOD64, non-zero imm divisor, check 5 @unpriv:OK torvalds#361/91 verifier_sdiv/SMOD64, non-zero imm divisor, check 6:OK torvalds#361/92 verifier_sdiv/SMOD64, non-zero imm divisor, check 6 @unpriv:OK torvalds#361/93 verifier_sdiv/SMOD64, non-zero imm divisor, check 7:OK torvalds#361/94 verifier_sdiv/SMOD64, non-zero imm divisor, check 7 @unpriv:OK torvalds#361/95 verifier_sdiv/SMOD64, non-zero imm divisor, check 8:OK torvalds#361/96 verifier_sdiv/SMOD64, non-zero imm divisor, check 8 @unpriv:OK torvalds#361/97 verifier_sdiv/SMOD64, non-zero reg divisor, check 1:OK torvalds#361/98 verifier_sdiv/SMOD64, non-zero reg divisor, check 1 @unpriv:OK torvalds#361/99 verifier_sdiv/SMOD64, non-zero reg divisor, check 2:OK torvalds#361/100 verifier_sdiv/SMOD64, non-zero reg divisor, check 2 @unpriv:OK torvalds#361/101 verifier_sdiv/SMOD64, non-zero reg divisor, check 3:OK torvalds#361/102 verifier_sdiv/SMOD64, non-zero reg divisor, check 3 @unpriv:OK torvalds#361/103 verifier_sdiv/SMOD64, non-zero reg divisor, check 4:OK torvalds#361/104 verifier_sdiv/SMOD64, non-zero reg divisor, check 4 @unpriv:OK torvalds#361/105 verifier_sdiv/SMOD64, non-zero reg divisor, check 
5:OK torvalds#361/106 verifier_sdiv/SMOD64, non-zero reg divisor, check 5 @unpriv:OK torvalds#361/107 verifier_sdiv/SMOD64, non-zero reg divisor, check 6:OK torvalds#361/108 verifier_sdiv/SMOD64, non-zero reg divisor, check 6 @unpriv:OK torvalds#361/109 verifier_sdiv/SMOD64, non-zero reg divisor, check 7:OK torvalds#361/110 verifier_sdiv/SMOD64, non-zero reg divisor, check 7 @unpriv:OK torvalds#361/111 verifier_sdiv/SMOD64, non-zero reg divisor, check 8:OK torvalds#361/112 verifier_sdiv/SMOD64, non-zero reg divisor, check 8 @unpriv:OK torvalds#361/113 verifier_sdiv/SDIV32, zero divisor:OK torvalds#361/114 verifier_sdiv/SDIV32, zero divisor @unpriv:OK torvalds#361/115 verifier_sdiv/SDIV64, zero divisor:OK torvalds#361/116 verifier_sdiv/SDIV64, zero divisor @unpriv:OK torvalds#361/117 verifier_sdiv/SMOD32, zero divisor:OK torvalds#361/118 verifier_sdiv/SMOD32, zero divisor @unpriv:OK torvalds#361/119 verifier_sdiv/SMOD64, zero divisor:OK torvalds#361/120 verifier_sdiv/SMOD64, zero divisor @unpriv:OK torvalds#361 verifier_sdiv:OK Summary: 5/163 PASSED, 0 SKIPPED, 0 FAILED # ./test_progs -t ldsx_insn test_map_val_and_probed_memory:PASS:test_ldsx_insn__open 0 nsec test_map_val_and_probed_memory:PASS:test_ldsx_insn__load 0 nsec libbpf: prog 'test_ptr_struct_arg': failed to attach: ERROR: strerror_r(-524)=22 libbpf: prog 'test_ptr_struct_arg': failed to auto-attach: -524 test_map_val_and_probed_memory:FAIL:test_ldsx_insn__attach unexpected error: -524 (errno 524) torvalds#116/1 ldsx_insn/map_val and probed_memory:FAIL torvalds#116/2 ldsx_insn/ctx_member_sign_ext:OK torvalds#116/3 ldsx_insn/ctx_member_narrow_sign_ext:OK torvalds#116 ldsx_insn:FAIL All error logs: test_map_val_and_probed_memory:PASS:test_ldsx_insn__open 0 nsec test_map_val_and_probed_memory:PASS:test_ldsx_insn__load 0 nsec libbpf: prog 'test_ptr_struct_arg': failed to attach: ERROR: strerror_r(-524)=22 libbpf: prog 'test_ptr_struct_arg': failed to auto-attach: -524 test_map_val_and_probed_memory:FAIL:test_ldsx_insn__attach unexpected error: -524 (errno 524) torvalds#116/1 ldsx_insn/map_val and probed_memory:FAIL torvalds#116 ldsx_insn:FAIL Summary: 0/2 PASSED, 0 SKIPPED, 1 FAILED Signed-off-by: Hengqi Chen <[email protected]>
mj22226 pushed a commit to mj22226/linux that referenced this pull request on Nov 9, 2023
Enable the cpu v4 tests for LoongArch. Currently, we don't have BPF trampoline in LoongArch JIT, so the fentry test `test_ptr_struct_arg` still failed, will followup. Test result attached below: # ./test_progs -t verifier_sdiv,verifier_movsx,verifier_ldsx,verifier_gotol,verifier_bswap torvalds#316/1 verifier_bswap/BSWAP, 16:OK torvalds#316/2 verifier_bswap/BSWAP, 16 @unpriv:OK torvalds#316/3 verifier_bswap/BSWAP, 32:OK torvalds#316/4 verifier_bswap/BSWAP, 32 @unpriv:OK torvalds#316/5 verifier_bswap/BSWAP, 64:OK torvalds#316/6 verifier_bswap/BSWAP, 64 @unpriv:OK torvalds#316 verifier_bswap:OK torvalds#330/1 verifier_gotol/gotol, small_imm:OK torvalds#330/2 verifier_gotol/gotol, small_imm @unpriv:OK torvalds#330 verifier_gotol:OK torvalds#338/1 verifier_ldsx/LDSX, S8:OK torvalds#338/2 verifier_ldsx/LDSX, S8 @unpriv:OK torvalds#338/3 verifier_ldsx/LDSX, S16:OK torvalds#338/4 verifier_ldsx/LDSX, S16 @unpriv:OK torvalds#338/5 verifier_ldsx/LDSX, S32:OK torvalds#338/6 verifier_ldsx/LDSX, S32 @unpriv:OK torvalds#338/7 verifier_ldsx/LDSX, S8 range checking, privileged:OK torvalds#338/8 verifier_ldsx/LDSX, S16 range checking:OK torvalds#338/9 verifier_ldsx/LDSX, S16 range checking @unpriv:OK torvalds#338/10 verifier_ldsx/LDSX, S32 range checking:OK torvalds#338/11 verifier_ldsx/LDSX, S32 range checking @unpriv:OK torvalds#338 verifier_ldsx:OK torvalds#349/1 verifier_movsx/MOV32SX, S8:OK torvalds#349/2 verifier_movsx/MOV32SX, S8 @unpriv:OK torvalds#349/3 verifier_movsx/MOV32SX, S16:OK torvalds#349/4 verifier_movsx/MOV32SX, S16 @unpriv:OK torvalds#349/5 verifier_movsx/MOV64SX, S8:OK torvalds#349/6 verifier_movsx/MOV64SX, S8 @unpriv:OK torvalds#349/7 verifier_movsx/MOV64SX, S16:OK torvalds#349/8 verifier_movsx/MOV64SX, S16 @unpriv:OK torvalds#349/9 verifier_movsx/MOV64SX, S32:OK torvalds#349/10 verifier_movsx/MOV64SX, S32 @unpriv:OK torvalds#349/11 verifier_movsx/MOV32SX, S8, range_check:OK torvalds#349/12 verifier_movsx/MOV32SX, S8, range_check @unpriv:OK torvalds#349/13 verifier_movsx/MOV32SX, S16, range_check:OK torvalds#349/14 verifier_movsx/MOV32SX, S16, range_check @unpriv:OK torvalds#349/15 verifier_movsx/MOV32SX, S16, range_check 2:OK torvalds#349/16 verifier_movsx/MOV32SX, S16, range_check 2 @unpriv:OK torvalds#349/17 verifier_movsx/MOV64SX, S8, range_check:OK torvalds#349/18 verifier_movsx/MOV64SX, S8, range_check @unpriv:OK torvalds#349/19 verifier_movsx/MOV64SX, S16, range_check:OK torvalds#349/20 verifier_movsx/MOV64SX, S16, range_check @unpriv:OK torvalds#349/21 verifier_movsx/MOV64SX, S32, range_check:OK torvalds#349/22 verifier_movsx/MOV64SX, S32, range_check @unpriv:OK torvalds#349/23 verifier_movsx/MOV64SX, S16, R10 Sign Extension:OK torvalds#349/24 verifier_movsx/MOV64SX, S16, R10 Sign Extension @unpriv:OK torvalds#349 verifier_movsx:OK torvalds#361/1 verifier_sdiv/SDIV32, non-zero imm divisor, check 1:OK torvalds#361/2 verifier_sdiv/SDIV32, non-zero imm divisor, check 1 @unpriv:OK torvalds#361/3 verifier_sdiv/SDIV32, non-zero imm divisor, check 2:OK torvalds#361/4 verifier_sdiv/SDIV32, non-zero imm divisor, check 2 @unpriv:OK torvalds#361/5 verifier_sdiv/SDIV32, non-zero imm divisor, check 3:OK torvalds#361/6 verifier_sdiv/SDIV32, non-zero imm divisor, check 3 @unpriv:OK torvalds#361/7 verifier_sdiv/SDIV32, non-zero imm divisor, check 4:OK torvalds#361/8 verifier_sdiv/SDIV32, non-zero imm divisor, check 4 @unpriv:OK torvalds#361/9 verifier_sdiv/SDIV32, non-zero imm divisor, check 5:OK torvalds#361/10 verifier_sdiv/SDIV32, non-zero imm divisor, check 5 @unpriv:OK torvalds#361/11 
verifier_sdiv/SDIV32, non-zero imm divisor, check 6:OK torvalds#361/12 verifier_sdiv/SDIV32, non-zero imm divisor, check 6 @unpriv:OK torvalds#361/13 verifier_sdiv/SDIV32, non-zero imm divisor, check 7:OK torvalds#361/14 verifier_sdiv/SDIV32, non-zero imm divisor, check 7 @unpriv:OK torvalds#361/15 verifier_sdiv/SDIV32, non-zero imm divisor, check 8:OK torvalds#361/16 verifier_sdiv/SDIV32, non-zero imm divisor, check 8 @unpriv:OK torvalds#361/17 verifier_sdiv/SDIV32, non-zero reg divisor, check 1:OK torvalds#361/18 verifier_sdiv/SDIV32, non-zero reg divisor, check 1 @unpriv:OK torvalds#361/19 verifier_sdiv/SDIV32, non-zero reg divisor, check 2:OK torvalds#361/20 verifier_sdiv/SDIV32, non-zero reg divisor, check 2 @unpriv:OK torvalds#361/21 verifier_sdiv/SDIV32, non-zero reg divisor, check 3:OK torvalds#361/22 verifier_sdiv/SDIV32, non-zero reg divisor, check 3 @unpriv:OK torvalds#361/23 verifier_sdiv/SDIV32, non-zero reg divisor, check 4:OK torvalds#361/24 verifier_sdiv/SDIV32, non-zero reg divisor, check 4 @unpriv:OK torvalds#361/25 verifier_sdiv/SDIV32, non-zero reg divisor, check 5:OK torvalds#361/26 verifier_sdiv/SDIV32, non-zero reg divisor, check 5 @unpriv:OK torvalds#361/27 verifier_sdiv/SDIV32, non-zero reg divisor, check 6:OK torvalds#361/28 verifier_sdiv/SDIV32, non-zero reg divisor, check 6 @unpriv:OK torvalds#361/29 verifier_sdiv/SDIV32, non-zero reg divisor, check 7:OK torvalds#361/30 verifier_sdiv/SDIV32, non-zero reg divisor, check 7 @unpriv:OK torvalds#361/31 verifier_sdiv/SDIV32, non-zero reg divisor, check 8:OK torvalds#361/32 verifier_sdiv/SDIV32, non-zero reg divisor, check 8 @unpriv:OK torvalds#361/33 verifier_sdiv/SDIV64, non-zero imm divisor, check 1:OK torvalds#361/34 verifier_sdiv/SDIV64, non-zero imm divisor, check 1 @unpriv:OK torvalds#361/35 verifier_sdiv/SDIV64, non-zero imm divisor, check 2:OK torvalds#361/36 verifier_sdiv/SDIV64, non-zero imm divisor, check 2 @unpriv:OK torvalds#361/37 verifier_sdiv/SDIV64, non-zero imm divisor, check 3:OK torvalds#361/38 verifier_sdiv/SDIV64, non-zero imm divisor, check 3 @unpriv:OK torvalds#361/39 verifier_sdiv/SDIV64, non-zero imm divisor, check 4:OK torvalds#361/40 verifier_sdiv/SDIV64, non-zero imm divisor, check 4 @unpriv:OK torvalds#361/41 verifier_sdiv/SDIV64, non-zero imm divisor, check 5:OK torvalds#361/42 verifier_sdiv/SDIV64, non-zero imm divisor, check 5 @unpriv:OK torvalds#361/43 verifier_sdiv/SDIV64, non-zero imm divisor, check 6:OK torvalds#361/44 verifier_sdiv/SDIV64, non-zero imm divisor, check 6 @unpriv:OK torvalds#361/45 verifier_sdiv/SDIV64, non-zero reg divisor, check 1:OK torvalds#361/46 verifier_sdiv/SDIV64, non-zero reg divisor, check 1 @unpriv:OK torvalds#361/47 verifier_sdiv/SDIV64, non-zero reg divisor, check 2:OK torvalds#361/48 verifier_sdiv/SDIV64, non-zero reg divisor, check 2 @unpriv:OK torvalds#361/49 verifier_sdiv/SDIV64, non-zero reg divisor, check 3:OK torvalds#361/50 verifier_sdiv/SDIV64, non-zero reg divisor, check 3 @unpriv:OK torvalds#361/51 verifier_sdiv/SDIV64, non-zero reg divisor, check 4:OK torvalds#361/52 verifier_sdiv/SDIV64, non-zero reg divisor, check 4 @unpriv:OK torvalds#361/53 verifier_sdiv/SDIV64, non-zero reg divisor, check 5:OK torvalds#361/54 verifier_sdiv/SDIV64, non-zero reg divisor, check 5 @unpriv:OK torvalds#361/55 verifier_sdiv/SDIV64, non-zero reg divisor, check 6:OK torvalds#361/56 verifier_sdiv/SDIV64, non-zero reg divisor, check 6 @unpriv:OK torvalds#361/57 verifier_sdiv/SMOD32, non-zero imm divisor, check 1:OK torvalds#361/58 verifier_sdiv/SMOD32, non-zero 
imm divisor, check 1 @unpriv:OK torvalds#361/59 verifier_sdiv/SMOD32, non-zero imm divisor, check 2:OK torvalds#361/60 verifier_sdiv/SMOD32, non-zero imm divisor, check 2 @unpriv:OK torvalds#361/61 verifier_sdiv/SMOD32, non-zero imm divisor, check 3:OK torvalds#361/62 verifier_sdiv/SMOD32, non-zero imm divisor, check 3 @unpriv:OK torvalds#361/63 verifier_sdiv/SMOD32, non-zero imm divisor, check 4:OK torvalds#361/64 verifier_sdiv/SMOD32, non-zero imm divisor, check 4 @unpriv:OK torvalds#361/65 verifier_sdiv/SMOD32, non-zero imm divisor, check 5:OK torvalds#361/66 verifier_sdiv/SMOD32, non-zero imm divisor, check 5 @unpriv:OK torvalds#361/67 verifier_sdiv/SMOD32, non-zero imm divisor, check 6:OK torvalds#361/68 verifier_sdiv/SMOD32, non-zero imm divisor, check 6 @unpriv:OK torvalds#361/69 verifier_sdiv/SMOD32, non-zero reg divisor, check 1:OK torvalds#361/70 verifier_sdiv/SMOD32, non-zero reg divisor, check 1 @unpriv:OK torvalds#361/71 verifier_sdiv/SMOD32, non-zero reg divisor, check 2:OK torvalds#361/72 verifier_sdiv/SMOD32, non-zero reg divisor, check 2 @unpriv:OK torvalds#361/73 verifier_sdiv/SMOD32, non-zero reg divisor, check 3:OK torvalds#361/74 verifier_sdiv/SMOD32, non-zero reg divisor, check 3 @unpriv:OK torvalds#361/75 verifier_sdiv/SMOD32, non-zero reg divisor, check 4:OK torvalds#361/76 verifier_sdiv/SMOD32, non-zero reg divisor, check 4 @unpriv:OK torvalds#361/77 verifier_sdiv/SMOD32, non-zero reg divisor, check 5:OK torvalds#361/78 verifier_sdiv/SMOD32, non-zero reg divisor, check 5 @unpriv:OK torvalds#361/79 verifier_sdiv/SMOD32, non-zero reg divisor, check 6:OK torvalds#361/80 verifier_sdiv/SMOD32, non-zero reg divisor, check 6 @unpriv:OK torvalds#361/81 verifier_sdiv/SMOD64, non-zero imm divisor, check 1:OK torvalds#361/82 verifier_sdiv/SMOD64, non-zero imm divisor, check 1 @unpriv:OK torvalds#361/83 verifier_sdiv/SMOD64, non-zero imm divisor, check 2:OK torvalds#361/84 verifier_sdiv/SMOD64, non-zero imm divisor, check 2 @unpriv:OK torvalds#361/85 verifier_sdiv/SMOD64, non-zero imm divisor, check 3:OK torvalds#361/86 verifier_sdiv/SMOD64, non-zero imm divisor, check 3 @unpriv:OK torvalds#361/87 verifier_sdiv/SMOD64, non-zero imm divisor, check 4:OK torvalds#361/88 verifier_sdiv/SMOD64, non-zero imm divisor, check 4 @unpriv:OK torvalds#361/89 verifier_sdiv/SMOD64, non-zero imm divisor, check 5:OK torvalds#361/90 verifier_sdiv/SMOD64, non-zero imm divisor, check 5 @unpriv:OK torvalds#361/91 verifier_sdiv/SMOD64, non-zero imm divisor, check 6:OK torvalds#361/92 verifier_sdiv/SMOD64, non-zero imm divisor, check 6 @unpriv:OK torvalds#361/93 verifier_sdiv/SMOD64, non-zero imm divisor, check 7:OK torvalds#361/94 verifier_sdiv/SMOD64, non-zero imm divisor, check 7 @unpriv:OK torvalds#361/95 verifier_sdiv/SMOD64, non-zero imm divisor, check 8:OK torvalds#361/96 verifier_sdiv/SMOD64, non-zero imm divisor, check 8 @unpriv:OK torvalds#361/97 verifier_sdiv/SMOD64, non-zero reg divisor, check 1:OK torvalds#361/98 verifier_sdiv/SMOD64, non-zero reg divisor, check 1 @unpriv:OK torvalds#361/99 verifier_sdiv/SMOD64, non-zero reg divisor, check 2:OK torvalds#361/100 verifier_sdiv/SMOD64, non-zero reg divisor, check 2 @unpriv:OK torvalds#361/101 verifier_sdiv/SMOD64, non-zero reg divisor, check 3:OK torvalds#361/102 verifier_sdiv/SMOD64, non-zero reg divisor, check 3 @unpriv:OK torvalds#361/103 verifier_sdiv/SMOD64, non-zero reg divisor, check 4:OK torvalds#361/104 verifier_sdiv/SMOD64, non-zero reg divisor, check 4 @unpriv:OK torvalds#361/105 verifier_sdiv/SMOD64, non-zero reg divisor, check 
5:OK torvalds#361/106 verifier_sdiv/SMOD64, non-zero reg divisor, check 5 @unpriv:OK torvalds#361/107 verifier_sdiv/SMOD64, non-zero reg divisor, check 6:OK torvalds#361/108 verifier_sdiv/SMOD64, non-zero reg divisor, check 6 @unpriv:OK torvalds#361/109 verifier_sdiv/SMOD64, non-zero reg divisor, check 7:OK torvalds#361/110 verifier_sdiv/SMOD64, non-zero reg divisor, check 7 @unpriv:OK torvalds#361/111 verifier_sdiv/SMOD64, non-zero reg divisor, check 8:OK torvalds#361/112 verifier_sdiv/SMOD64, non-zero reg divisor, check 8 @unpriv:OK torvalds#361/113 verifier_sdiv/SDIV32, zero divisor:OK torvalds#361/114 verifier_sdiv/SDIV32, zero divisor @unpriv:OK torvalds#361/115 verifier_sdiv/SDIV64, zero divisor:OK torvalds#361/116 verifier_sdiv/SDIV64, zero divisor @unpriv:OK torvalds#361/117 verifier_sdiv/SMOD32, zero divisor:OK torvalds#361/118 verifier_sdiv/SMOD32, zero divisor @unpriv:OK torvalds#361/119 verifier_sdiv/SMOD64, zero divisor:OK torvalds#361/120 verifier_sdiv/SMOD64, zero divisor @unpriv:OK torvalds#361 verifier_sdiv:OK Summary: 5/163 PASSED, 0 SKIPPED, 0 FAILED # ./test_progs -t ldsx_insn test_map_val_and_probed_memory:PASS:test_ldsx_insn__open 0 nsec test_map_val_and_probed_memory:PASS:test_ldsx_insn__load 0 nsec libbpf: prog 'test_ptr_struct_arg': failed to attach: ERROR: strerror_r(-524)=22 libbpf: prog 'test_ptr_struct_arg': failed to auto-attach: -524 test_map_val_and_probed_memory:FAIL:test_ldsx_insn__attach unexpected error: -524 (errno 524) torvalds#116/1 ldsx_insn/map_val and probed_memory:FAIL torvalds#116/2 ldsx_insn/ctx_member_sign_ext:OK torvalds#116/3 ldsx_insn/ctx_member_narrow_sign_ext:OK torvalds#116 ldsx_insn:FAIL All error logs: test_map_val_and_probed_memory:PASS:test_ldsx_insn__open 0 nsec test_map_val_and_probed_memory:PASS:test_ldsx_insn__load 0 nsec libbpf: prog 'test_ptr_struct_arg': failed to attach: ERROR: strerror_r(-524)=22 libbpf: prog 'test_ptr_struct_arg': failed to auto-attach: -524 test_map_val_and_probed_memory:FAIL:test_ldsx_insn__attach unexpected error: -524 (errno 524) torvalds#116/1 ldsx_insn/map_val and probed_memory:FAIL torvalds#116 ldsx_insn:FAIL Summary: 0/2 PASSED, 0 SKIPPED, 1 FAILED Signed-off-by: Hengqi Chen <[email protected]> Signed-off-by: Huacai Chen <[email protected]>
yetist pushed a commit to loongarchlinux/linux that referenced this pull request on Nov 13, 2023
Enable the cpu v4 tests for LoongArch. Currently, we don't have BPF trampoline in LoongArch JIT, so the fentry test `test_ptr_struct_arg` still failed, will followup. Test result attached below: # ./test_progs -t verifier_sdiv,verifier_movsx,verifier_ldsx,verifier_gotol,verifier_bswap torvalds#316/1 verifier_bswap/BSWAP, 16:OK torvalds#316/2 verifier_bswap/BSWAP, 16 @unpriv:OK torvalds#316/3 verifier_bswap/BSWAP, 32:OK torvalds#316/4 verifier_bswap/BSWAP, 32 @unpriv:OK torvalds#316/5 verifier_bswap/BSWAP, 64:OK torvalds#316/6 verifier_bswap/BSWAP, 64 @unpriv:OK torvalds#316 verifier_bswap:OK torvalds#330/1 verifier_gotol/gotol, small_imm:OK torvalds#330/2 verifier_gotol/gotol, small_imm @unpriv:OK torvalds#330 verifier_gotol:OK torvalds#338/1 verifier_ldsx/LDSX, S8:OK torvalds#338/2 verifier_ldsx/LDSX, S8 @unpriv:OK torvalds#338/3 verifier_ldsx/LDSX, S16:OK torvalds#338/4 verifier_ldsx/LDSX, S16 @unpriv:OK torvalds#338/5 verifier_ldsx/LDSX, S32:OK torvalds#338/6 verifier_ldsx/LDSX, S32 @unpriv:OK torvalds#338/7 verifier_ldsx/LDSX, S8 range checking, privileged:OK torvalds#338/8 verifier_ldsx/LDSX, S16 range checking:OK torvalds#338/9 verifier_ldsx/LDSX, S16 range checking @unpriv:OK torvalds#338/10 verifier_ldsx/LDSX, S32 range checking:OK torvalds#338/11 verifier_ldsx/LDSX, S32 range checking @unpriv:OK torvalds#338 verifier_ldsx:OK torvalds#349/1 verifier_movsx/MOV32SX, S8:OK torvalds#349/2 verifier_movsx/MOV32SX, S8 @unpriv:OK torvalds#349/3 verifier_movsx/MOV32SX, S16:OK torvalds#349/4 verifier_movsx/MOV32SX, S16 @unpriv:OK torvalds#349/5 verifier_movsx/MOV64SX, S8:OK torvalds#349/6 verifier_movsx/MOV64SX, S8 @unpriv:OK torvalds#349/7 verifier_movsx/MOV64SX, S16:OK torvalds#349/8 verifier_movsx/MOV64SX, S16 @unpriv:OK torvalds#349/9 verifier_movsx/MOV64SX, S32:OK torvalds#349/10 verifier_movsx/MOV64SX, S32 @unpriv:OK torvalds#349/11 verifier_movsx/MOV32SX, S8, range_check:OK torvalds#349/12 verifier_movsx/MOV32SX, S8, range_check @unpriv:OK torvalds#349/13 verifier_movsx/MOV32SX, S16, range_check:OK torvalds#349/14 verifier_movsx/MOV32SX, S16, range_check @unpriv:OK torvalds#349/15 verifier_movsx/MOV32SX, S16, range_check 2:OK torvalds#349/16 verifier_movsx/MOV32SX, S16, range_check 2 @unpriv:OK torvalds#349/17 verifier_movsx/MOV64SX, S8, range_check:OK torvalds#349/18 verifier_movsx/MOV64SX, S8, range_check @unpriv:OK torvalds#349/19 verifier_movsx/MOV64SX, S16, range_check:OK torvalds#349/20 verifier_movsx/MOV64SX, S16, range_check @unpriv:OK torvalds#349/21 verifier_movsx/MOV64SX, S32, range_check:OK torvalds#349/22 verifier_movsx/MOV64SX, S32, range_check @unpriv:OK torvalds#349/23 verifier_movsx/MOV64SX, S16, R10 Sign Extension:OK torvalds#349/24 verifier_movsx/MOV64SX, S16, R10 Sign Extension @unpriv:OK torvalds#349 verifier_movsx:OK torvalds#361/1 verifier_sdiv/SDIV32, non-zero imm divisor, check 1:OK torvalds#361/2 verifier_sdiv/SDIV32, non-zero imm divisor, check 1 @unpriv:OK torvalds#361/3 verifier_sdiv/SDIV32, non-zero imm divisor, check 2:OK torvalds#361/4 verifier_sdiv/SDIV32, non-zero imm divisor, check 2 @unpriv:OK torvalds#361/5 verifier_sdiv/SDIV32, non-zero imm divisor, check 3:OK torvalds#361/6 verifier_sdiv/SDIV32, non-zero imm divisor, check 3 @unpriv:OK torvalds#361/7 verifier_sdiv/SDIV32, non-zero imm divisor, check 4:OK torvalds#361/8 verifier_sdiv/SDIV32, non-zero imm divisor, check 4 @unpriv:OK torvalds#361/9 verifier_sdiv/SDIV32, non-zero imm divisor, check 5:OK torvalds#361/10 verifier_sdiv/SDIV32, non-zero imm divisor, check 5 @unpriv:OK torvalds#361/11 
verifier_sdiv/SDIV32, non-zero imm divisor, check 6:OK torvalds#361/12 verifier_sdiv/SDIV32, non-zero imm divisor, check 6 @unpriv:OK torvalds#361/13 verifier_sdiv/SDIV32, non-zero imm divisor, check 7:OK torvalds#361/14 verifier_sdiv/SDIV32, non-zero imm divisor, check 7 @unpriv:OK torvalds#361/15 verifier_sdiv/SDIV32, non-zero imm divisor, check 8:OK torvalds#361/16 verifier_sdiv/SDIV32, non-zero imm divisor, check 8 @unpriv:OK torvalds#361/17 verifier_sdiv/SDIV32, non-zero reg divisor, check 1:OK torvalds#361/18 verifier_sdiv/SDIV32, non-zero reg divisor, check 1 @unpriv:OK torvalds#361/19 verifier_sdiv/SDIV32, non-zero reg divisor, check 2:OK torvalds#361/20 verifier_sdiv/SDIV32, non-zero reg divisor, check 2 @unpriv:OK torvalds#361/21 verifier_sdiv/SDIV32, non-zero reg divisor, check 3:OK torvalds#361/22 verifier_sdiv/SDIV32, non-zero reg divisor, check 3 @unpriv:OK torvalds#361/23 verifier_sdiv/SDIV32, non-zero reg divisor, check 4:OK torvalds#361/24 verifier_sdiv/SDIV32, non-zero reg divisor, check 4 @unpriv:OK torvalds#361/25 verifier_sdiv/SDIV32, non-zero reg divisor, check 5:OK torvalds#361/26 verifier_sdiv/SDIV32, non-zero reg divisor, check 5 @unpriv:OK torvalds#361/27 verifier_sdiv/SDIV32, non-zero reg divisor, check 6:OK torvalds#361/28 verifier_sdiv/SDIV32, non-zero reg divisor, check 6 @unpriv:OK torvalds#361/29 verifier_sdiv/SDIV32, non-zero reg divisor, check 7:OK torvalds#361/30 verifier_sdiv/SDIV32, non-zero reg divisor, check 7 @unpriv:OK torvalds#361/31 verifier_sdiv/SDIV32, non-zero reg divisor, check 8:OK torvalds#361/32 verifier_sdiv/SDIV32, non-zero reg divisor, check 8 @unpriv:OK torvalds#361/33 verifier_sdiv/SDIV64, non-zero imm divisor, check 1:OK torvalds#361/34 verifier_sdiv/SDIV64, non-zero imm divisor, check 1 @unpriv:OK torvalds#361/35 verifier_sdiv/SDIV64, non-zero imm divisor, check 2:OK torvalds#361/36 verifier_sdiv/SDIV64, non-zero imm divisor, check 2 @unpriv:OK torvalds#361/37 verifier_sdiv/SDIV64, non-zero imm divisor, check 3:OK torvalds#361/38 verifier_sdiv/SDIV64, non-zero imm divisor, check 3 @unpriv:OK torvalds#361/39 verifier_sdiv/SDIV64, non-zero imm divisor, check 4:OK torvalds#361/40 verifier_sdiv/SDIV64, non-zero imm divisor, check 4 @unpriv:OK torvalds#361/41 verifier_sdiv/SDIV64, non-zero imm divisor, check 5:OK torvalds#361/42 verifier_sdiv/SDIV64, non-zero imm divisor, check 5 @unpriv:OK torvalds#361/43 verifier_sdiv/SDIV64, non-zero imm divisor, check 6:OK torvalds#361/44 verifier_sdiv/SDIV64, non-zero imm divisor, check 6 @unpriv:OK torvalds#361/45 verifier_sdiv/SDIV64, non-zero reg divisor, check 1:OK torvalds#361/46 verifier_sdiv/SDIV64, non-zero reg divisor, check 1 @unpriv:OK torvalds#361/47 verifier_sdiv/SDIV64, non-zero reg divisor, check 2:OK torvalds#361/48 verifier_sdiv/SDIV64, non-zero reg divisor, check 2 @unpriv:OK torvalds#361/49 verifier_sdiv/SDIV64, non-zero reg divisor, check 3:OK torvalds#361/50 verifier_sdiv/SDIV64, non-zero reg divisor, check 3 @unpriv:OK torvalds#361/51 verifier_sdiv/SDIV64, non-zero reg divisor, check 4:OK torvalds#361/52 verifier_sdiv/SDIV64, non-zero reg divisor, check 4 @unpriv:OK torvalds#361/53 verifier_sdiv/SDIV64, non-zero reg divisor, check 5:OK torvalds#361/54 verifier_sdiv/SDIV64, non-zero reg divisor, check 5 @unpriv:OK torvalds#361/55 verifier_sdiv/SDIV64, non-zero reg divisor, check 6:OK torvalds#361/56 verifier_sdiv/SDIV64, non-zero reg divisor, check 6 @unpriv:OK torvalds#361/57 verifier_sdiv/SMOD32, non-zero imm divisor, check 1:OK torvalds#361/58 verifier_sdiv/SMOD32, non-zero 
imm divisor, check 1 @unpriv:OK torvalds#361/59 verifier_sdiv/SMOD32, non-zero imm divisor, check 2:OK torvalds#361/60 verifier_sdiv/SMOD32, non-zero imm divisor, check 2 @unpriv:OK torvalds#361/61 verifier_sdiv/SMOD32, non-zero imm divisor, check 3:OK torvalds#361/62 verifier_sdiv/SMOD32, non-zero imm divisor, check 3 @unpriv:OK torvalds#361/63 verifier_sdiv/SMOD32, non-zero imm divisor, check 4:OK torvalds#361/64 verifier_sdiv/SMOD32, non-zero imm divisor, check 4 @unpriv:OK torvalds#361/65 verifier_sdiv/SMOD32, non-zero imm divisor, check 5:OK torvalds#361/66 verifier_sdiv/SMOD32, non-zero imm divisor, check 5 @unpriv:OK torvalds#361/67 verifier_sdiv/SMOD32, non-zero imm divisor, check 6:OK torvalds#361/68 verifier_sdiv/SMOD32, non-zero imm divisor, check 6 @unpriv:OK torvalds#361/69 verifier_sdiv/SMOD32, non-zero reg divisor, check 1:OK torvalds#361/70 verifier_sdiv/SMOD32, non-zero reg divisor, check 1 @unpriv:OK torvalds#361/71 verifier_sdiv/SMOD32, non-zero reg divisor, check 2:OK torvalds#361/72 verifier_sdiv/SMOD32, non-zero reg divisor, check 2 @unpriv:OK torvalds#361/73 verifier_sdiv/SMOD32, non-zero reg divisor, check 3:OK torvalds#361/74 verifier_sdiv/SMOD32, non-zero reg divisor, check 3 @unpriv:OK torvalds#361/75 verifier_sdiv/SMOD32, non-zero reg divisor, check 4:OK torvalds#361/76 verifier_sdiv/SMOD32, non-zero reg divisor, check 4 @unpriv:OK torvalds#361/77 verifier_sdiv/SMOD32, non-zero reg divisor, check 5:OK torvalds#361/78 verifier_sdiv/SMOD32, non-zero reg divisor, check 5 @unpriv:OK torvalds#361/79 verifier_sdiv/SMOD32, non-zero reg divisor, check 6:OK torvalds#361/80 verifier_sdiv/SMOD32, non-zero reg divisor, check 6 @unpriv:OK torvalds#361/81 verifier_sdiv/SMOD64, non-zero imm divisor, check 1:OK torvalds#361/82 verifier_sdiv/SMOD64, non-zero imm divisor, check 1 @unpriv:OK torvalds#361/83 verifier_sdiv/SMOD64, non-zero imm divisor, check 2:OK torvalds#361/84 verifier_sdiv/SMOD64, non-zero imm divisor, check 2 @unpriv:OK torvalds#361/85 verifier_sdiv/SMOD64, non-zero imm divisor, check 3:OK torvalds#361/86 verifier_sdiv/SMOD64, non-zero imm divisor, check 3 @unpriv:OK torvalds#361/87 verifier_sdiv/SMOD64, non-zero imm divisor, check 4:OK torvalds#361/88 verifier_sdiv/SMOD64, non-zero imm divisor, check 4 @unpriv:OK torvalds#361/89 verifier_sdiv/SMOD64, non-zero imm divisor, check 5:OK torvalds#361/90 verifier_sdiv/SMOD64, non-zero imm divisor, check 5 @unpriv:OK torvalds#361/91 verifier_sdiv/SMOD64, non-zero imm divisor, check 6:OK torvalds#361/92 verifier_sdiv/SMOD64, non-zero imm divisor, check 6 @unpriv:OK torvalds#361/93 verifier_sdiv/SMOD64, non-zero imm divisor, check 7:OK torvalds#361/94 verifier_sdiv/SMOD64, non-zero imm divisor, check 7 @unpriv:OK torvalds#361/95 verifier_sdiv/SMOD64, non-zero imm divisor, check 8:OK torvalds#361/96 verifier_sdiv/SMOD64, non-zero imm divisor, check 8 @unpriv:OK torvalds#361/97 verifier_sdiv/SMOD64, non-zero reg divisor, check 1:OK torvalds#361/98 verifier_sdiv/SMOD64, non-zero reg divisor, check 1 @unpriv:OK torvalds#361/99 verifier_sdiv/SMOD64, non-zero reg divisor, check 2:OK torvalds#361/100 verifier_sdiv/SMOD64, non-zero reg divisor, check 2 @unpriv:OK torvalds#361/101 verifier_sdiv/SMOD64, non-zero reg divisor, check 3:OK torvalds#361/102 verifier_sdiv/SMOD64, non-zero reg divisor, check 3 @unpriv:OK torvalds#361/103 verifier_sdiv/SMOD64, non-zero reg divisor, check 4:OK torvalds#361/104 verifier_sdiv/SMOD64, non-zero reg divisor, check 4 @unpriv:OK torvalds#361/105 verifier_sdiv/SMOD64, non-zero reg divisor, check 
5:OK torvalds#361/106 verifier_sdiv/SMOD64, non-zero reg divisor, check 5 @unpriv:OK torvalds#361/107 verifier_sdiv/SMOD64, non-zero reg divisor, check 6:OK torvalds#361/108 verifier_sdiv/SMOD64, non-zero reg divisor, check 6 @unpriv:OK torvalds#361/109 verifier_sdiv/SMOD64, non-zero reg divisor, check 7:OK torvalds#361/110 verifier_sdiv/SMOD64, non-zero reg divisor, check 7 @unpriv:OK torvalds#361/111 verifier_sdiv/SMOD64, non-zero reg divisor, check 8:OK torvalds#361/112 verifier_sdiv/SMOD64, non-zero reg divisor, check 8 @unpriv:OK torvalds#361/113 verifier_sdiv/SDIV32, zero divisor:OK torvalds#361/114 verifier_sdiv/SDIV32, zero divisor @unpriv:OK torvalds#361/115 verifier_sdiv/SDIV64, zero divisor:OK torvalds#361/116 verifier_sdiv/SDIV64, zero divisor @unpriv:OK torvalds#361/117 verifier_sdiv/SMOD32, zero divisor:OK torvalds#361/118 verifier_sdiv/SMOD32, zero divisor @unpriv:OK torvalds#361/119 verifier_sdiv/SMOD64, zero divisor:OK torvalds#361/120 verifier_sdiv/SMOD64, zero divisor @unpriv:OK torvalds#361 verifier_sdiv:OK Summary: 5/163 PASSED, 0 SKIPPED, 0 FAILED # ./test_progs -t ldsx_insn test_map_val_and_probed_memory:PASS:test_ldsx_insn__open 0 nsec test_map_val_and_probed_memory:PASS:test_ldsx_insn__load 0 nsec libbpf: prog 'test_ptr_struct_arg': failed to attach: ERROR: strerror_r(-524)=22 libbpf: prog 'test_ptr_struct_arg': failed to auto-attach: -524 test_map_val_and_probed_memory:FAIL:test_ldsx_insn__attach unexpected error: -524 (errno 524) torvalds#116/1 ldsx_insn/map_val and probed_memory:FAIL torvalds#116/2 ldsx_insn/ctx_member_sign_ext:OK torvalds#116/3 ldsx_insn/ctx_member_narrow_sign_ext:OK torvalds#116 ldsx_insn:FAIL All error logs: test_map_val_and_probed_memory:PASS:test_ldsx_insn__open 0 nsec test_map_val_and_probed_memory:PASS:test_ldsx_insn__load 0 nsec libbpf: prog 'test_ptr_struct_arg': failed to attach: ERROR: strerror_r(-524)=22 libbpf: prog 'test_ptr_struct_arg': failed to auto-attach: -524 test_map_val_and_probed_memory:FAIL:test_ldsx_insn__attach unexpected error: -524 (errno 524) torvalds#116/1 ldsx_insn/map_val and probed_memory:FAIL torvalds#116 ldsx_insn:FAIL Summary: 0/2 PASSED, 0 SKIPPED, 1 FAILED Signed-off-by: Hengqi Chen <[email protected]> Signed-off-by: Huacai Chen <[email protected]>
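The -524 in the log above is ENOTSUPP: without a BPF trampoline in the LoongArch JIT, the fentry program `test_ptr_struct_arg` cannot be attached. Below is a minimal sketch of how a selftest could skip just that program instead of failing the whole run, assuming the usual bpftool-generated skeleton (`test_ldsx_insn.skel.h`) and libbpf's `bpf_program__set_autoattach()`; the wrapper name `run_ldsx_insn` and the `have_trampoline` flag are made up for illustration.

```c
/* Hypothetical sketch only: disable auto-attach of the fentry program on
 * JITs without trampoline support so the rest of the test still runs.
 */
#include <stdbool.h>
#include <bpf/libbpf.h>
#include "test_ldsx_insn.skel.h"   /* bpftool-generated skeleton */

int run_ldsx_insn(bool have_trampoline)
{
	struct test_ldsx_insn *skel;
	int err;

	skel = test_ldsx_insn__open();
	if (!skel)
		return -1;

	/* -524 (ENOTSUPP) in the log comes from the missing BPF trampoline;
	 * don't auto-attach the fentry prog when the JIT can't support it.
	 */
	if (!have_trampoline)
		bpf_program__set_autoattach(skel->progs.test_ptr_struct_arg, false);

	err = test_ldsx_insn__load(skel);
	if (!err)
		err = test_ldsx_insn__attach(skel);

	test_ldsx_insn__destroy(skel);
	return err;
}
```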
RevySR pushed a commit to RevySR/linux that referenced this pull request Nov 23, 2023
sean-jc added a commit to sean-jc/linux that referenced this pull request Jun 7, 2024
Use a dedicated mutex to guard kvm_usage_count to fix a potential deadlock on x86 due to a chain of locks and SRCU synchronizations. Translating the below lockdep splat, CPU1 torvalds#6 will wait on CPU0 #1, CPU0 torvalds#8 will wait on CPU2 #3, and CPU2 torvalds#7 will wait on CPU1 #4 (if there's a writer, due to the fairness of r/w sempahores). CPU0 CPU1 CPU2 1 lock(&kvm->slots_lock); 2 lock(&vcpu->mutex); 3 lock(&kvm->srcu); 4 lock(cpu_hotplug_lock); 5 lock(kvm_lock); 6 lock(&kvm->slots_lock); 7 lock(cpu_hotplug_lock); 8 sync(&kvm->srcu); Note, there are likely more potential deadlocks in KVM x86, e.g. the same pattern of taking cpu_hotplug_lock outside of kvm_lock likely exists with __kvmclock_cpufreq_notifier(), but actually triggering such deadlocks is beyond rare due to the combination of dependencies and timings involved. E.g. the cpufreq notifier is only used on older CPUs without a constant TSC, mucking with the NX hugepage mitigation while VMs are running is very uncommon, and doing so while also onlining/offlining a CPU (necessary to generate contention on cpu_hotplug_lock) would be even more unusual. ====================================================== WARNING: possible circular locking dependency detected 6.10.0-smp--c257535a0c9d-pip torvalds#330 Tainted: G S O ------------------------------------------------------ tee/35048 is trying to acquire lock: ff6a80eced71e0a8 (&kvm->slots_lock){+.+.}-{3:3}, at: set_nx_huge_pages+0x179/0x1e0 [kvm] but task is already holding lock: ffffffffc07abb08 (kvm_lock){+.+.}-{3:3}, at: set_nx_huge_pages+0x14a/0x1e0 [kvm] which lock already depends on the new lock. the existing dependency chain (in reverse order) is: -> #3 (kvm_lock){+.+.}-{3:3}: __mutex_lock+0x6a/0xb40 mutex_lock_nested+0x1f/0x30 kvm_dev_ioctl+0x4fb/0xe50 [kvm] __se_sys_ioctl+0x7b/0xd0 __x64_sys_ioctl+0x21/0x30 x64_sys_call+0x15d0/0x2e60 do_syscall_64+0x83/0x160 entry_SYSCALL_64_after_hwframe+0x76/0x7e -> #2 (cpu_hotplug_lock){++++}-{0:0}: cpus_read_lock+0x2e/0xb0 static_key_slow_inc+0x16/0x30 kvm_lapic_set_base+0x6a/0x1c0 [kvm] kvm_set_apic_base+0x8f/0xe0 [kvm] kvm_set_msr_common+0x9ae/0xf80 [kvm] vmx_set_msr+0xa54/0xbe0 [kvm_intel] __kvm_set_msr+0xb6/0x1a0 [kvm] kvm_arch_vcpu_ioctl+0xeca/0x10c0 [kvm] kvm_vcpu_ioctl+0x485/0x5b0 [kvm] __se_sys_ioctl+0x7b/0xd0 __x64_sys_ioctl+0x21/0x30 x64_sys_call+0x15d0/0x2e60 do_syscall_64+0x83/0x160 entry_SYSCALL_64_after_hwframe+0x76/0x7e -> #1 (&kvm->srcu){.+.+}-{0:0}: __synchronize_srcu+0x44/0x1a0 synchronize_srcu_expedited+0x21/0x30 kvm_swap_active_memslots+0x110/0x1c0 [kvm] kvm_set_memslot+0x360/0x620 [kvm] __kvm_set_memory_region+0x27b/0x300 [kvm] kvm_vm_ioctl_set_memory_region+0x43/0x60 [kvm] kvm_vm_ioctl+0x295/0x650 [kvm] __se_sys_ioctl+0x7b/0xd0 __x64_sys_ioctl+0x21/0x30 x64_sys_call+0x15d0/0x2e60 do_syscall_64+0x83/0x160 entry_SYSCALL_64_after_hwframe+0x76/0x7e -> #0 (&kvm->slots_lock){+.+.}-{3:3}: __lock_acquire+0x15ef/0x2e30 lock_acquire+0xe0/0x260 __mutex_lock+0x6a/0xb40 mutex_lock_nested+0x1f/0x30 set_nx_huge_pages+0x179/0x1e0 [kvm] param_attr_store+0x93/0x100 module_attr_store+0x22/0x40 sysfs_kf_write+0x81/0xb0 kernfs_fop_write_iter+0x133/0x1d0 vfs_write+0x28d/0x380 ksys_write+0x70/0xe0 __x64_sys_write+0x1f/0x30 x64_sys_call+0x281b/0x2e60 do_syscall_64+0x83/0x160 entry_SYSCALL_64_after_hwframe+0x76/0x7e Cc: Chao Gao <[email protected]> Fixes: 0bf5049 ("KVM: Drop kvm_count_lock and instead protect kvm_usage_count with kvm_lock") Cc: [email protected] Signed-off-by: Sean Christopherson <[email protected]>
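A rough sketch of the idea described above, not the actual patch: give `kvm_usage_count` its own mutex so the hardware-enable path never nests `kvm_lock` inside `cpu_hotplug_lock`, breaking the cycle in the splat. The function and placeholder names below are illustrative only.

```c
#include <linux/cpu.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(kvm_usage_lock);	/* dedicated lock, not kvm_lock */
static int kvm_usage_count;

/* Placeholder for the real per-CPU virtualization-enable work. */
static int enable_virtualization_on_all_cpus(void)
{
	return 0;
}

static int hardware_enable_all(void)
{
	int r = 0;

	/*
	 * cpu_hotplug_lock is still taken, but kvm_lock is not, so this
	 * path no longer participates in the kvm_lock -> cpu_hotplug_lock
	 * ordering shown in the lockdep splat.
	 */
	cpus_read_lock();
	mutex_lock(&kvm_usage_lock);

	if (kvm_usage_count++ == 0)
		r = enable_virtualization_on_all_cpus();

	mutex_unlock(&kvm_usage_lock);
	cpus_read_unlock();

	return r;
}
```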
sean-jc added a commit to sean-jc/linux that referenced this pull request Jun 7, 2024
sean-jc added a commit to sean-jc/linux that referenced this pull request Jun 8, 2024
shipujin pushed a commit to shipujin/linux that referenced this pull request Jul 24, 2024
intel-lab-lkp pushed a commit to intel-lab-lkp/linux that referenced this pull request Aug 14, 2024
Use a dedicated mutex to guard kvm_usage_count to fix a potential deadlock on x86 due to a chain of locks and SRCU synchronizations. Translating the below lockdep splat, CPU1 torvalds#6 will wait on CPU0 #1, CPU0 torvalds#8 will wait on CPU2 #3, and CPU2 torvalds#7 will wait on CPU1 #4 (if there's a writer, due to the fairness of r/w semaphores). CPU0 CPU1 CPU2 1 lock(&kvm->slots_lock); 2 lock(&vcpu->mutex); 3 lock(&kvm->srcu); 4 lock(cpu_hotplug_lock); 5 lock(kvm_lock); 6 lock(&kvm->slots_lock); 7 lock(cpu_hotplug_lock); 8 sync(&kvm->srcu); Note, there are likely more potential deadlocks in KVM x86, e.g. the same pattern of taking cpu_hotplug_lock outside of kvm_lock likely exists with __kvmclock_cpufreq_notifier(), but actually triggering such deadlocks is beyond rare due to the combination of dependencies and timings involved. E.g. the cpufreq notifier is only used on older CPUs without a constant TSC, mucking with the NX hugepage mitigation while VMs are running is very uncommon, and doing so while also onlining/offlining a CPU (necessary to generate contention on cpu_hotplug_lock) would be even more unusual. ====================================================== WARNING: possible circular locking dependency detected 6.10.0-smp--c257535a0c9d-pip torvalds#330 Tainted: G S O ------------------------------------------------------ tee/35048 is trying to acquire lock: ff6a80eced71e0a8 (&kvm->slots_lock){+.+.}-{3:3}, at: set_nx_huge_pages+0x179/0x1e0 [kvm] but task is already holding lock: ffffffffc07abb08 (kvm_lock){+.+.}-{3:3}, at: set_nx_huge_pages+0x14a/0x1e0 [kvm] which lock already depends on the new lock. the existing dependency chain (in reverse order) is: -> #3 (kvm_lock){+.+.}-{3:3}: __mutex_lock+0x6a/0xb40 mutex_lock_nested+0x1f/0x30 kvm_dev_ioctl+0x4fb/0xe50 [kvm] __se_sys_ioctl+0x7b/0xd0 __x64_sys_ioctl+0x21/0x30 x64_sys_call+0x15d0/0x2e60 do_syscall_64+0x83/0x160 entry_SYSCALL_64_after_hwframe+0x76/0x7e -> #2 (cpu_hotplug_lock){++++}-{0:0}: cpus_read_lock+0x2e/0xb0 static_key_slow_inc+0x16/0x30 kvm_lapic_set_base+0x6a/0x1c0 [kvm] kvm_set_apic_base+0x8f/0xe0 [kvm] kvm_set_msr_common+0x9ae/0xf80 [kvm] vmx_set_msr+0xa54/0xbe0 [kvm_intel] __kvm_set_msr+0xb6/0x1a0 [kvm] kvm_arch_vcpu_ioctl+0xeca/0x10c0 [kvm] kvm_vcpu_ioctl+0x485/0x5b0 [kvm] __se_sys_ioctl+0x7b/0xd0 __x64_sys_ioctl+0x21/0x30 x64_sys_call+0x15d0/0x2e60 do_syscall_64+0x83/0x160 entry_SYSCALL_64_after_hwframe+0x76/0x7e -> #1 (&kvm->srcu){.+.+}-{0:0}: __synchronize_srcu+0x44/0x1a0 synchronize_srcu_expedited+0x21/0x30 kvm_swap_active_memslots+0x110/0x1c0 [kvm] kvm_set_memslot+0x360/0x620 [kvm] __kvm_set_memory_region+0x27b/0x300 [kvm] kvm_vm_ioctl_set_memory_region+0x43/0x60 [kvm] kvm_vm_ioctl+0x295/0x650 [kvm] __se_sys_ioctl+0x7b/0xd0 __x64_sys_ioctl+0x21/0x30 x64_sys_call+0x15d0/0x2e60 do_syscall_64+0x83/0x160 entry_SYSCALL_64_after_hwframe+0x76/0x7e -> #0 (&kvm->slots_lock){+.+.}-{3:3}: __lock_acquire+0x15ef/0x2e30 lock_acquire+0xe0/0x260 __mutex_lock+0x6a/0xb40 mutex_lock_nested+0x1f/0x30 set_nx_huge_pages+0x179/0x1e0 [kvm] param_attr_store+0x93/0x100 module_attr_store+0x22/0x40 sysfs_kf_write+0x81/0xb0 kernfs_fop_write_iter+0x133/0x1d0 vfs_write+0x28d/0x380 ksys_write+0x70/0xe0 __x64_sys_write+0x1f/0x30 x64_sys_call+0x281b/0x2e60 do_syscall_64+0x83/0x160 entry_SYSCALL_64_after_hwframe+0x76/0x7e Cc: Chao Gao <[email protected]> Fixes: 0bf5049 ("KVM: Drop kvm_count_lock and instead protect kvm_usage_count with kvm_lock") Cc: [email protected] Signed-off-by: Sean Christopherson <[email protected]> Acked-by: Kai Huang 
<[email protected]> Reviewed-by: Kai Huang <[email protected]> Message-ID: <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
intel-lab-lkp pushed a commit to intel-lab-lkp/linux that referenced this pull request Aug 14, 2024
sean-jc added a commit to sean-jc/linux that referenced this pull request Aug 28, 2024
Use a dedicated mutex to guard kvm_usage_count to fix a potential deadlock on x86 due to a chain of locks and SRCU synchronizations. Translating the below lockdep splat, CPU1 torvalds#6 will wait on CPU0 #1, CPU0 torvalds#8 will wait on CPU2 #3, and CPU2 torvalds#7 will wait on CPU1 #4 (if there's a writer, due to the fairness of r/w semaphores). CPU0 CPU1 CPU2 1 lock(&kvm->slots_lock); 2 lock(&vcpu->mutex); 3 lock(&kvm->srcu); 4 lock(cpu_hotplug_lock); 5 lock(kvm_lock); 6 lock(&kvm->slots_lock); 7 lock(cpu_hotplug_lock); 8 sync(&kvm->srcu); Note, there are likely more potential deadlocks in KVM x86, e.g. the same pattern of taking cpu_hotplug_lock outside of kvm_lock likely exists with __kvmclock_cpufreq_notifier(): cpuhp_cpufreq_online() | -> cpufreq_online() | -> cpufreq_gov_performance_limits() | -> __cpufreq_driver_target() | -> __target_index() | -> cpufreq_freq_transition_begin() | -> cpufreq_notify_transition() | -> ... __kvmclock_cpufreq_notifier() But, actually triggering such deadlocks is beyond rare due to the combination of dependencies and timings involved. E.g. the cpufreq notifier is only used on older CPUs without a constant TSC, mucking with the NX hugepage mitigation while VMs are running is very uncommon, and doing so while also onlining/offlining a CPU (necessary to generate contention on cpu_hotplug_lock) would be even more unusual. The most robust solution to the general cpu_hotplug_lock issue is likely to switch vm_list to be an RCU-protected list, e.g. so that x86's cpufreq notifier doesn't to take kvm_lock. For now, settle for fixing the most blatant deadlock, as switching VM destruction to e.g. call_rcu() is a much more involved change. ====================================================== WARNING: possible circular locking dependency detected 6.10.0-smp--c257535a0c9d-pip torvalds#330 Tainted: G S O ------------------------------------------------------ tee/35048 is trying to acquire lock: ff6a80eced71e0a8 (&kvm->slots_lock){+.+.}-{3:3}, at: set_nx_huge_pages+0x179/0x1e0 [kvm] but task is already holding lock: ffffffffc07abb08 (kvm_lock){+.+.}-{3:3}, at: set_nx_huge_pages+0x14a/0x1e0 [kvm] which lock already depends on the new lock. 
the existing dependency chain (in reverse order) is: -> #3 (kvm_lock){+.+.}-{3:3}: __mutex_lock+0x6a/0xb40 mutex_lock_nested+0x1f/0x30 kvm_dev_ioctl+0x4fb/0xe50 [kvm] __se_sys_ioctl+0x7b/0xd0 __x64_sys_ioctl+0x21/0x30 x64_sys_call+0x15d0/0x2e60 do_syscall_64+0x83/0x160 entry_SYSCALL_64_after_hwframe+0x76/0x7e -> #2 (cpu_hotplug_lock){++++}-{0:0}: cpus_read_lock+0x2e/0xb0 static_key_slow_inc+0x16/0x30 kvm_lapic_set_base+0x6a/0x1c0 [kvm] kvm_set_apic_base+0x8f/0xe0 [kvm] kvm_set_msr_common+0x9ae/0xf80 [kvm] vmx_set_msr+0xa54/0xbe0 [kvm_intel] __kvm_set_msr+0xb6/0x1a0 [kvm] kvm_arch_vcpu_ioctl+0xeca/0x10c0 [kvm] kvm_vcpu_ioctl+0x485/0x5b0 [kvm] __se_sys_ioctl+0x7b/0xd0 __x64_sys_ioctl+0x21/0x30 x64_sys_call+0x15d0/0x2e60 do_syscall_64+0x83/0x160 entry_SYSCALL_64_after_hwframe+0x76/0x7e -> #1 (&kvm->srcu){.+.+}-{0:0}: __synchronize_srcu+0x44/0x1a0 synchronize_srcu_expedited+0x21/0x30 kvm_swap_active_memslots+0x110/0x1c0 [kvm] kvm_set_memslot+0x360/0x620 [kvm] __kvm_set_memory_region+0x27b/0x300 [kvm] kvm_vm_ioctl_set_memory_region+0x43/0x60 [kvm] kvm_vm_ioctl+0x295/0x650 [kvm] __se_sys_ioctl+0x7b/0xd0 __x64_sys_ioctl+0x21/0x30 x64_sys_call+0x15d0/0x2e60 do_syscall_64+0x83/0x160 entry_SYSCALL_64_after_hwframe+0x76/0x7e -> #0 (&kvm->slots_lock){+.+.}-{3:3}: __lock_acquire+0x15ef/0x2e30 lock_acquire+0xe0/0x260 __mutex_lock+0x6a/0xb40 mutex_lock_nested+0x1f/0x30 set_nx_huge_pages+0x179/0x1e0 [kvm] param_attr_store+0x93/0x100 module_attr_store+0x22/0x40 sysfs_kf_write+0x81/0xb0 kernfs_fop_write_iter+0x133/0x1d0 vfs_write+0x28d/0x380 ksys_write+0x70/0xe0 __x64_sys_write+0x1f/0x30 x64_sys_call+0x281b/0x2e60 do_syscall_64+0x83/0x160 entry_SYSCALL_64_after_hwframe+0x76/0x7e Cc: Chao Gao <[email protected]> Fixes: 0bf5049 ("KVM: Drop kvm_count_lock and instead protect kvm_usage_count with kvm_lock") Cc: [email protected] Reviewed-by: Kai Huang <[email protected]> Acked-by: Kai Huang <[email protected]> Tested-by: Farrah Chen <[email protected]> Signed-off-by: Sean Christopherson <[email protected]>
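For reference, a sketch of the RCU-protected `vm_list` walk the message alludes to as the more robust long-term option. This is not what the commit does, and it assumes VM structures would only be freed after an RCU grace period (today `vm_list` is protected by `kvm_lock`); the helper name is hypothetical.

```c
#include <linux/kvm_host.h>	/* struct kvm, vm_list */
#include <linux/rculist.h>

/* Hypothetical: count VMs without taking kvm_lock, relying on RCU to keep
 * the list entries alive across the walk.
 */
static int count_vms_under_rcu(void)
{
	struct kvm *kvm;
	int n = 0;

	rcu_read_lock();
	list_for_each_entry_rcu(kvm, &vm_list, vm_list)
		n++;
	rcu_read_unlock();

	return n;
}
```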
sean-jc added a commit to sean-jc/linux that referenced this pull request Aug 30, 2024
intel-lab-lkp
pushed a commit
to intel-lab-lkp/linux
that referenced
this pull request
Aug 30, 2024
lougovsk
pushed a commit
to lougovsk/linux
that referenced
this pull request
Aug 30, 2024
lougovsk
pushed a commit
to lougovsk/linux
that referenced
this pull request
Aug 30, 2024
intel-lab-lkp
pushed a commit
to intel-lab-lkp/linux
that referenced
this pull request
Sep 6, 2024
mj22226
pushed a commit
to mj22226/linux
that referenced
this pull request
Oct 2, 2024
mj22226
pushed a commit
to mj22226/linux
that referenced
this pull request
Oct 2, 2024
mj22226
pushed a commit
to mj22226/linux
that referenced
this pull request
Oct 3, 2024
KexyBiscuit
pushed a commit
to AOSC-Tracking/linux
that referenced
this pull request
Oct 4, 2024
ptr1337
pushed a commit
to CachyOS/linux
that referenced
this pull request
Oct 4, 2024
1054009064
pushed a commit
to 1054009064/linux
that referenced
this pull request
Oct 4, 2024
1054009064
pushed a commit
to 1054009064/linux
that referenced
this pull request
Oct 4, 2024