Test commit #2
Closed
Signed-off-by: Daniel Baluta <[email protected]>
dbaluta pushed a commit that referenced this pull request on May 11, 2020
FuzzUSB (a variant of syzkaller) found a free-while-still-in-use bug in the USB scatter-gather library:

BUG: KASAN: use-after-free in atomic_read include/asm-generic/atomic-instrumented.h:26 [inline]
BUG: KASAN: use-after-free in usb_hcd_unlink_urb+0x5f/0x170 drivers/usb/core/hcd.c:1607
Read of size 4 at addr ffff888065379610 by task kworker/u4:1/27
CPU: 1 PID: 27 Comm: kworker/u4:1 Not tainted 5.5.11 #2
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1 04/01/2014
Workqueue: scsi_tmf_2 scmd_eh_abort_handler
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0xce/0x128 lib/dump_stack.c:118
 print_address_description.constprop.4+0x21/0x3c0 mm/kasan/report.c:374
 __kasan_report+0x153/0x1cb mm/kasan/report.c:506
 kasan_report+0x12/0x20 mm/kasan/common.c:639
 check_memory_region_inline mm/kasan/generic.c:185 [inline]
 check_memory_region+0x152/0x1b0 mm/kasan/generic.c:192
 __kasan_check_read+0x11/0x20 mm/kasan/common.c:95
 atomic_read include/asm-generic/atomic-instrumented.h:26 [inline]
 usb_hcd_unlink_urb+0x5f/0x170 drivers/usb/core/hcd.c:1607
 usb_unlink_urb+0x72/0xb0 drivers/usb/core/urb.c:657
 usb_sg_cancel+0x14e/0x290 drivers/usb/core/message.c:602
 usb_stor_stop_transport+0x5e/0xa0 drivers/usb/storage/transport.c:937

This bug occurs when cancellation of the S-G transfer races with transfer completion. When that happens, usb_sg_cancel() may continue to access the transfer's URBs after usb_sg_wait() has freed them.

The bug is caused by the fact that usb_sg_cancel() does not take any sort of reference to the transfer, and so there is nothing to prevent the URBs from being deallocated while the routine is trying to use them. The fix is to take such a reference by incrementing the transfer's io->count field while the cancellation is in progress and decrementing it afterward. The transfer's URBs are not deallocated until io->complete is triggered, which happens when io->count reaches zero.

Signed-off-by: Alan Stern <[email protected]>
Reported-and-tested-by: Kyungtae Kim <[email protected]>
CC: <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>
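The fix described above is essentially a reference-count pattern: pin the transfer for the duration of the cancellation so the completion path cannot free the URBs underneath it. Below is a minimal, hedged sketch of that idea only; it is not the actual patch. The io->lock, io->count, io->status and io->complete fields come from struct usb_sg_request in <linux/usb.h>, and the unlink loop is elided.

#include <linux/usb.h>
#include <linux/errno.h>
#include <linux/spinlock.h>
#include <linux/completion.h>

/* Illustrative sketch of the refcounting idea, not the real diff. */
static void sg_cancel_sketch(struct usb_sg_request *io)
{
	unsigned long flags;

	spin_lock_irqsave(&io->lock, flags);
	if (io->status || io->count == 0) {	/* nothing in flight to cancel */
		spin_unlock_irqrestore(&io->lock, flags);
		return;
	}
	io->status = -ECONNRESET;
	io->count++;				/* pin the URBs while we cancel */
	spin_unlock_irqrestore(&io->lock, flags);

	/* ... unlink the individual URBs here, as usb_sg_cancel() does ... */

	spin_lock_irqsave(&io->lock, flags);
	if (--io->count == 0)			/* drop our reference ... */
		complete(&io->complete);	/* ... URB freeing is gated on this */
	spin_unlock_irqrestore(&io->lock, flags);
}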
dbaluta pushed a commit that referenced this pull request on Jun 17, 2020
…f fs_info::journal_info [BUG] One run of btrfs/063 triggered the following lockdep warning: ============================================ WARNING: possible recursive locking detected 5.6.0-rc7-custom+ linux-kernel-labs#48 Not tainted -------------------------------------------- kworker/u24:0/7 is trying to acquire lock: ffff88817d3a46e0 (sb_internal#2){.+.+}, at: start_transaction+0x66c/0x890 [btrfs] but task is already holding lock: ffff88817d3a46e0 (sb_internal#2){.+.+}, at: start_transaction+0x66c/0x890 [btrfs] other info that might help us debug this: Possible unsafe locking scenario: CPU0 ---- lock(sb_internal#2); lock(sb_internal#2); *** DEADLOCK *** May be due to missing lock nesting notation 4 locks held by kworker/u24:0/7: #0: ffff88817b495948 ((wq_completion)btrfs-endio-write){+.+.}, at: process_one_work+0x557/0xb80 #1: ffff888189ea7db8 ((work_completion)(&work->normal_work)){+.+.}, at: process_one_work+0x557/0xb80 #2: ffff88817d3a46e0 (sb_internal#2){.+.+}, at: start_transaction+0x66c/0x890 [btrfs] #3: ffff888174ca4da8 (&fs_info->reloc_mutex){+.+.}, at: btrfs_record_root_in_trans+0x83/0xd0 [btrfs] stack backtrace: CPU: 0 PID: 7 Comm: kworker/u24:0 Not tainted 5.6.0-rc7-custom+ linux-kernel-labs#48 Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Workqueue: btrfs-endio-write btrfs_work_helper [btrfs] Call Trace: dump_stack+0xc2/0x11a __lock_acquire.cold+0xce/0x214 lock_acquire+0xe6/0x210 __sb_start_write+0x14e/0x290 start_transaction+0x66c/0x890 [btrfs] btrfs_join_transaction+0x1d/0x20 [btrfs] find_free_extent+0x1504/0x1a50 [btrfs] btrfs_reserve_extent+0xd5/0x1f0 [btrfs] btrfs_alloc_tree_block+0x1ac/0x570 [btrfs] btrfs_copy_root+0x213/0x580 [btrfs] create_reloc_root+0x3bd/0x470 [btrfs] btrfs_init_reloc_root+0x2d2/0x310 [btrfs] record_root_in_trans+0x191/0x1d0 [btrfs] btrfs_record_root_in_trans+0x90/0xd0 [btrfs] start_transaction+0x16e/0x890 [btrfs] btrfs_join_transaction+0x1d/0x20 [btrfs] btrfs_finish_ordered_io+0x55d/0xcd0 [btrfs] finish_ordered_fn+0x15/0x20 [btrfs] btrfs_work_helper+0x116/0x9a0 [btrfs] process_one_work+0x632/0xb80 worker_thread+0x80/0x690 kthread+0x1a3/0x1f0 ret_from_fork+0x27/0x50 It's pretty hard to reproduce, only one hit so far. [CAUSE] This is because we're calling btrfs_join_transaction() without re-using the current running one: btrfs_finish_ordered_io() |- btrfs_join_transaction() <<< Call #1 |- btrfs_record_root_in_trans() |- btrfs_reserve_extent() |- btrfs_join_transaction() <<< Call #2 Normally such btrfs_join_transaction() call should re-use the existing one, without trying to re-start a transaction. But the problem is, in btrfs_join_transaction() call #1, we call btrfs_record_root_in_trans() before initializing current::journal_info. And in btrfs_join_transaction() call #2, we're relying on current::journal_info to avoid such deadlock. [FIX] Call btrfs_record_root_in_trans() after we have initialized current::journal_info. CC: [email protected] # 4.4+ Signed-off-by: Qu Wenruo <[email protected]> Reviewed-by: David Sterba <[email protected]> Signed-off-by: David Sterba <[email protected]>
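The ordering requirement is easier to see next to the re-use check it protects: a nested join only short-circuits when current->journal_info is already set. A minimal sketch of that idea follows; join_transaction_sketch() is a hypothetical illustration, not the btrfs code, and btrfs_trans_handle is only forward-declared here.

#include <linux/sched.h>

struct btrfs_trans_handle;	/* opaque for the purpose of this sketch */

/* Sketch: publish the handle in current->journal_info before doing anything
 * (such as btrfs_record_root_in_trans()) that may nest another join, so the
 * nested call re-uses this transaction instead of re-taking sb_internal. */
static struct btrfs_trans_handle *join_transaction_sketch(struct btrfs_trans_handle *new_handle)
{
	if (current->journal_info)		/* already inside a running transaction? */
		return current->journal_info;	/* re-use it; no recursive sb_internal */

	current->journal_info = new_handle;	/* the fix: set this before recording roots */
	return new_handle;
}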
dbaluta pushed a commit that referenced this pull request on Jun 17, 2020
…kernel/git/kvmarm/kvmarm into kvm-master

KVM/arm fixes for Linux 5.7, take #2

- Fix compilation with Clang
- Correctly initialize GICv4.1 in the absence of a virtual ITS
- Move SP_EL0 save/restore to the guest entry/exit code
- Handle PC wrap around on 32bit guests, and narrow all 32bit registers on userspace access
dbaluta pushed a commit that referenced this pull request on Jun 17, 2020
abs_vdebt is an atomic_64 which tracks how much over budget a given cgroup is and controls the activation of the use_delay mechanism. Once a cgroup goes over budget from forced IOs, it has to pay it back with its future budget.

The progress guarantee on debt paying comes from the iocg being active - active iocgs are processed by the periodic timer, which ensures that as time passes the debts dissipate and the iocg returns to normal operation.

However, both iocg activation and vdebt handling are asynchronous and a sequence like the following may happen.

1. The iocg is in the process of being deactivated by the periodic timer.
2. A bio enters ioc_rqos_throttle(), calls iocg_activate() which returns without anything because it still sees that the iocg is already active.
3. The iocg is deactivated.
4. The bio from #2 is over budget but needs to be forced. It increases abs_vdebt and goes over the threshold and enables use_delay.
5. IO control is enabled for the iocg's subtree and now IOs are attributed to the descendant cgroups and the iocg itself no longer issues IOs.

This leaves the iocg with stuck abs_vdebt - it has debt but is inactive and has no further IOs which can activate it. This can end up unduly punishing all the descendant cgroups.

The usual throttling path has the same issue - the iocg must be active while throttled to ensure that a future event will wake it up - and solves the problem by synchronizing the throttling path with a spinlock. abs_vdebt handling is another form of overage handling and shares a lot of characteristics including the fact that it isn't in the hottest path.

This patch fixes the above and other possible races by strictly synchronizing abs_vdebt and use_delay handling with iocg->waitq.lock.

Signed-off-by: Tejun Heo <[email protected]>
Reported-by: Vlad Dmitriev <[email protected]>
Cc: [email protected] # v5.4+
Fixes: e1518f6 ("blk-iocost: Don't let merges push vtime into the future")
Signed-off-by: Jens Axboe <[email protected]>
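The rule the patch states - every abs_vdebt and use_delay update happens under iocg->waitq.lock - can be sketched as below. The iocg_sketch struct is a stand-in with just the two fields named in the message; this is an illustration of the locking rule, not the blk-iocost code.

#include <linux/wait.h>
#include <linux/spinlock.h>
#include <linux/atomic.h>
#include <linux/types.h>

/* Minimal stand-in for the iocg described above; illustrative only. */
struct iocg_sketch {
	struct wait_queue_head waitq;
	atomic64_t abs_vdebt;
};

/* Sketch: debt and its use_delay side effects are only touched while
 * holding waitq.lock, so they cannot race with (de)activation. */
static void iocg_incur_debt_sketch(struct iocg_sketch *iocg, u64 abs_cost)
{
	unsigned long flags;

	spin_lock_irqsave(&iocg->waitq.lock, flags);
	atomic64_add(abs_cost, &iocg->abs_vdebt);
	/* ... adjust the matching use_delay state here, still under the lock ... */
	spin_unlock_irqrestore(&iocg->waitq.lock, flags);
}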
dbaluta pushed a commit that referenced this pull request on Jun 17, 2020
Since 5.7-rc1, on btrfs we have a percpu counter initialization for which we always pass a GFP_KERNEL gfp_t argument (this happens since commit 2992df7 ("btrfs: Implement DREW lock")).

That is safe in some contexts but not in others where allowing fs reclaim could lead to a deadlock because we are either holding some btrfs lock needed for a transaction commit or holding a btrfs transaction handle open. Because of that we surround the call to the function that initializes the percpu counter with a NOFS context using memalloc_nofs_save() (this is done at btrfs_init_fs_root()).

However it turns out that this is not enough to prevent a possible deadlock because percpu_alloc() determines if it is in an atomic context by looking exclusively at the gfp flags passed to it (GFP_KERNEL in this case) and it is not aware that a NOFS context is set. Because percpu_alloc() thinks it is in a non-atomic context it locks the pcpu_alloc_mutex. This can result in a btrfs deadlock when pcpu_balance_workfn() is running, has acquired that mutex and is waiting for reclaim, while the btrfs task that called percpu_counter_init() (and therefore percpu_alloc()) is holding either the btrfs commit_root semaphore or a transaction handle (done at fs/btrfs/backref.c: iterate_extent_inodes()), which prevents reclaim from finishing as an attempt to commit the current btrfs transaction will deadlock.

Lockdep reports this issue with the following trace:

======================================================
WARNING: possible circular locking dependency detected
5.6.0-rc7-btrfs-next-77 #1 Not tainted
------------------------------------------------------
kswapd0/91 is trying to acquire lock:
ffff8938a3b3fdc8 (&delayed_node->mutex){+.+.}, at: __btrfs_release_delayed_node.part.0+0x3f/0x320 [btrfs]
but task is already holding lock:
ffffffffb4f0dbc0 (fs_reclaim){+.+.}, at: __fs_reclaim_acquire+0x5/0x30
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is: -> #4 (fs_reclaim){+.+.}: fs_reclaim_acquire.part.0+0x25/0x30 __kmalloc+0x5f/0x3a0 pcpu_create_chunk+0x19/0x230 pcpu_balance_workfn+0x56a/0x680 process_one_work+0x235/0x5f0 worker_thread+0x50/0x3b0 kthread+0x120/0x140 ret_from_fork+0x3a/0x50 -> #3 (pcpu_alloc_mutex){+.+.}: __mutex_lock+0xa9/0xaf0 pcpu_alloc+0x480/0x7c0 __percpu_counter_init+0x50/0xd0 btrfs_drew_lock_init+0x22/0x70 [btrfs] btrfs_get_fs_root+0x29c/0x5c0 [btrfs] resolve_indirect_refs+0x120/0xa30 [btrfs] find_parent_nodes+0x50b/0xf30 [btrfs] btrfs_find_all_leafs+0x60/0xb0 [btrfs] iterate_extent_inodes+0x139/0x2f0 [btrfs] iterate_inodes_from_logical+0xa1/0xe0 [btrfs] btrfs_ioctl_logical_to_ino+0xb4/0x190 [btrfs] btrfs_ioctl+0x165a/0x3130 [btrfs] ksys_ioctl+0x87/0xc0 __x64_sys_ioctl+0x16/0x20 do_syscall_64+0x5c/0x260 entry_SYSCALL_64_after_hwframe+0x49/0xbe -> #2 (&fs_info->commit_root_sem){++++}: down_write+0x38/0x70 btrfs_cache_block_group+0x2ec/0x500 [btrfs] find_free_extent+0xc6a/0x1600 [btrfs] btrfs_reserve_extent+0x9b/0x180 [btrfs] btrfs_alloc_tree_block+0xc1/0x350 [btrfs] alloc_tree_block_no_bg_flush+0x4a/0x60 [btrfs] __btrfs_cow_block+0x122/0x5a0 [btrfs] btrfs_cow_block+0x106/0x240 [btrfs] commit_cowonly_roots+0x55/0x310 [btrfs] btrfs_commit_transaction+0x509/0xb20 [btrfs] sync_filesystem+0x74/0x90 generic_shutdown_super+0x22/0x100 kill_anon_super+0x14/0x30 btrfs_kill_super+0x12/0x20 [btrfs] deactivate_locked_super+0x31/0x70 cleanup_mnt+0x100/0x160 task_work_run+0x93/0xc0 exit_to_usermode_loop+0xf9/0x100 do_syscall_64+0x20d/0x260 entry_SYSCALL_64_after_hwframe+0x49/0xbe -> #1 (&space_info->groups_sem){++++}: down_read+0x3c/0x140 find_free_extent+0xef6/0x1600 [btrfs] btrfs_reserve_extent+0x9b/0x180 [btrfs] btrfs_alloc_tree_block+0xc1/0x350 [btrfs] alloc_tree_block_no_bg_flush+0x4a/0x60 [btrfs] __btrfs_cow_block+0x122/0x5a0 [btrfs] btrfs_cow_block+0x106/0x240 [btrfs] btrfs_search_slot+0x50c/0xd60 [btrfs] btrfs_lookup_inode+0x3a/0xc0 [btrfs] __btrfs_update_delayed_inode+0x90/0x280 [btrfs] __btrfs_commit_inode_delayed_items+0x81f/0x870 [btrfs] __btrfs_run_delayed_items+0x8e/0x180 [btrfs] btrfs_commit_transaction+0x31b/0xb20 [btrfs] iterate_supers+0x87/0xf0 ksys_sync+0x60/0xb0 __ia32_sys_sync+0xa/0x10 do_syscall_64+0x5c/0x260 entry_SYSCALL_64_after_hwframe+0x49/0xbe -> #0 (&delayed_node->mutex){+.+.}: __lock_acquire+0xef0/0x1c80 lock_acquire+0xa2/0x1d0 __mutex_lock+0xa9/0xaf0 __btrfs_release_delayed_node.part.0+0x3f/0x320 [btrfs] btrfs_evict_inode+0x40d/0x560 [btrfs] evict+0xd9/0x1c0 dispose_list+0x48/0x70 prune_icache_sb+0x54/0x80 super_cache_scan+0x124/0x1a0 do_shrink_slab+0x176/0x440 shrink_slab+0x23a/0x2c0 shrink_node+0x188/0x6e0 balance_pgdat+0x31d/0x7f0 kswapd+0x238/0x550 kthread+0x120/0x140 ret_from_fork+0x3a/0x50 other info that might help us debug this: Chain exists of: &delayed_node->mutex --> pcpu_alloc_mutex --> fs_reclaim Possible unsafe locking scenario: CPU0 CPU1 ---- ---- lock(fs_reclaim); lock(pcpu_alloc_mutex); lock(fs_reclaim); lock(&delayed_node->mutex); *** DEADLOCK *** 3 locks held by kswapd0/91: #0: (fs_reclaim){+.+.}, at: __fs_reclaim_acquire+0x5/0x30 #1: (shrinker_rwsem){++++}, at: shrink_slab+0x12f/0x2c0 #2: (&type->s_umount_key#43){++++}, at: trylock_super+0x16/0x50 stack backtrace: CPU: 1 PID: 91 Comm: kswapd0 Not tainted 5.6.0-rc7-btrfs-next-77 #1 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.0-0-ga698c8995f-prebuilt.qemu.org 04/01/2014 Call Trace: dump_stack+0x8f/0xd0 check_noncircular+0x170/0x190 
 __lock_acquire+0xef0/0x1c80
 lock_acquire+0xa2/0x1d0
 __mutex_lock+0xa9/0xaf0
 __btrfs_release_delayed_node.part.0+0x3f/0x320 [btrfs]
 btrfs_evict_inode+0x40d/0x560 [btrfs]
 evict+0xd9/0x1c0
 dispose_list+0x48/0x70
 prune_icache_sb+0x54/0x80
 super_cache_scan+0x124/0x1a0
 do_shrink_slab+0x176/0x440
 shrink_slab+0x23a/0x2c0
 shrink_node+0x188/0x6e0
 balance_pgdat+0x31d/0x7f0
 kswapd+0x238/0x550
 kthread+0x120/0x140
 ret_from_fork+0x3a/0x50

This could be fixed by making btrfs pass GFP_NOFS instead of GFP_KERNEL to percpu_counter_init() in contexts where it is not reclaim safe, however that type of approach is discouraged since memalloc_[nofs|noio]_save() were introduced. Therefore this change makes pcpu_alloc() look for an existing nofs/noio context before deciding whether it is in an atomic context or not.

Signed-off-by: Filipe Manana <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Reviewed-by: Andrew Morton <[email protected]>
Acked-by: Tejun Heo <[email protected]>
Acked-by: Dennis Zhou <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Christoph Lameter <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
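The behaviour the change describes - pcpu_alloc() consulting the task's NOFS/NOIO scope before deciding it may sleep and take pcpu_alloc_mutex - boils down to masking the gfp flags through current_gfp_context(). A hedged sketch of that decision, not the exact upstream diff:

#include <linux/sched/mm.h>
#include <linux/gfp.h>

/* Sketch: fold the memalloc_nofs/noio scope of the current task into the
 * caller's gfp mask; anything less than a full GFP_KERNEL is then treated
 * as atomic, keeping the allocator away from the mutex and from reclaim. */
static bool pcpu_is_atomic_sketch(gfp_t gfp)
{
	gfp = current_gfp_context(gfp);	/* strips __GFP_FS/__GFP_IO inside NOFS/NOIO scopes */
	return (gfp & GFP_KERNEL) != GFP_KERNEL;
}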
dbaluta pushed a commit that referenced this pull request on Jun 17, 2020
This BUG halt was reported a while back, but the patch somehow got missed:

PID: 2879 TASK: c16adaa0 CPU: 1 COMMAND: "sctpn"
#0 [f418dd28] crash_kexec at c04a7d8c
#1 [f418dd7c] oops_end at c0863e02
#2 [f418dd90] do_invalid_op at c040aaca
#3 [f418de28] error_code (via invalid_op) at c08631a5
EAX: f34baac0 EBX: 00000090 ECX: f418deb0 EDX: f5542950 EBP: 00000000
DS: 007b ESI: f34ba800 ES: 007b EDI: f418dea0 GS: 00e0
CS: 0060 EIP: c046fa5e ERR: ffffffff EFLAGS: 00010286
#4 [f418de5c] add_timer at c046fa5e
#5 [f418de68] sctp_do_sm at f8db8c77 [sctp]
#6 [f418df30] sctp_primitive_SHUTDOWN at f8dcc1b5 [sctp]
#7 [f418df48] inet_shutdown at c080baf9
#8 [f418df5c] sys_shutdown at c079eedf
#9 [f418df7] sys_socketcall at c079fe88
EAX: ffffffda EBX: 0000000d ECX: bfceea90 EDX: 0937af98
DS: 007b ESI: 0000000c ES: 007b EDI: b7150ae4
SS: 007b ESP: bfceea7c EBP: bfceeaa8 GS: 0033
CS: 0073 EIP: b775c424 ERR: 00000066 EFLAGS: 00000282

It appears that the side effect that starts the shutdown timer was processed multiple times, which can happen as multiple paths can trigger it. This of course leads to the BUG halt in add_timer getting called.

The fix seems pretty straightforward: just check, before the timer is added, whether it has already been started. If it has, modify the timer instead to min(current expiration, new expiration).

It has been tested but not confirmed to fix the problem, as the issue has only occurred in production environments where test kernels are enjoined from being installed. It appears to be a sane fix to me though. Also, recently, Jere found a reproducer posted on the list to confirm that this resolves the issue.

Signed-off-by: Neil Horman <[email protected]>
CC: Vlad Yasevich <[email protected]>
CC: "David S. Miller" <[email protected]>
CC: [email protected]
CC: [email protected]
CC: [email protected]
Acked-by: Marcelo Ricardo Leitner <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
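The check-before-arming pattern the message describes looks roughly like the sketch below; the timer pointer and expiry values are placeholders, not the sctp state-machine code.

#include <linux/timer.h>
#include <linux/kernel.h>

/* Illustrative sketch: never add_timer() a timer that is already pending. */
static void start_or_update_timer_sketch(struct timer_list *timer,
					 unsigned long new_expires)
{
	if (timer_pending(timer)) {
		/* Already armed: just pull the expiry in if needed. */
		mod_timer(timer, min(timer->expires, new_expires));
	} else {
		timer->expires = new_expires;
		add_timer(timer);	/* would BUG() if the timer were already pending */
	}
}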
dbaluta pushed a commit that referenced this pull request on Jun 17, 2020
Ido Schimmel says: ==================== netdevsim: Two small fixes Fix two bugs observed while analyzing regression failures. Patch #1 fixes a bug where sometimes the drop counter of a packet trap policer would not increase. Patch #2 adds a missing initialization of a variable in a related selftest. ==================== Signed-off-by: David S. Miller <[email protected]>
dbaluta pushed a commit that referenced this pull request on Jun 17, 2020
Ido Schimmel says: ==================== mlxsw: Various fixes Patch #1 from Jiri fixes a use-after-free discovered while fuzzing mlxsw / devlink with syzkaller. Patch #2 from Amit works around a limitation in new versions of arping, which is used in several selftests. ==================== Signed-off-by: David S. Miller <[email protected]>
dbaluta pushed a commit that referenced this pull request on Jun 17, 2020
…inux/kernel/git/dhowells/linux-fs David Howells says: ==================== rxrpc: Fix a warning and a leak [ver #2] Here are a couple of fixes for AF_RXRPC: (1) Fix an uninitialised variable warning. (2) Fix a leak of the ticket on error in rxkad. ==================== Signed-off-by: David S. Miller <[email protected]>
dbaluta pushed a commit that referenced this pull request on Jul 16, 2020
GFP_KERNEL flag specifies a normal kernel allocation in which executing in process context without any locks and can sleep. mmio_diff takes sometime to finish all the diff compare and it has locks, continue using GFP_KERNEL will output below trace if LOCKDEP enabled. Use GFP_ATOMIC instead. V2: Rebase. ===================================================== WARNING: SOFTIRQ-safe -> SOFTIRQ-unsafe lock order detected 5.7.0-rc2 linux-kernel-labs#400 Not tainted ----------------------------------------------------- is trying to acquire: ffffffffb47bea20 (fs_reclaim){+.+.}-{0:0}, at: fs_reclaim_acquire.part.0+0x0/0x30 and this task is already holding: ffff88845b85cc90 (&gvt->scheduler.mmio_context_lock){+.-.}-{2:2}, at: vgpu_mmio_diff_show+0xcf/0x2e0 which would create a new lock dependency: (&gvt->scheduler.mmio_context_lock){+.-.}-{2:2} -> (fs_reclaim){+.+.}-{0:0} but this new dependency connects a SOFTIRQ-irq-safe lock: (&gvt->scheduler.mmio_context_lock){+.-.}-{2:2} ... which became SOFTIRQ-irq-safe at: lock_acquire+0x175/0x4e0 _raw_spin_lock_irqsave+0x2b/0x40 shadow_context_status_change+0xfe/0x2f0 notifier_call_chain+0x6a/0xa0 __atomic_notifier_call_chain+0x5f/0xf0 execlists_schedule_out+0x42a/0x820 process_csb+0xe7/0x3e0 execlists_submission_tasklet+0x5c/0x1d0 tasklet_action_common.isra.0+0xeb/0x260 __do_softirq+0x11d/0x56f irq_exit+0xf6/0x100 do_IRQ+0x7f/0x160 ret_from_intr+0x0/0x2a cpuidle_enter_state+0xcd/0x5b0 cpuidle_enter+0x37/0x60 do_idle+0x337/0x3f0 cpu_startup_entry+0x14/0x20 start_kernel+0x58b/0x5c5 secondary_startup_64+0xa4/0xb0 to a SOFTIRQ-irq-unsafe lock: (fs_reclaim){+.+.}-{0:0} ... which became SOFTIRQ-irq-unsafe at: ... lock_acquire+0x175/0x4e0 fs_reclaim_acquire.part.0+0x20/0x30 kmem_cache_alloc_node_trace+0x2e/0x290 alloc_worker+0x2b/0xb0 init_rescuer.part.0+0x17/0xe0 workqueue_init+0x293/0x3bb kernel_init_freeable+0x149/0x325 kernel_init+0x8/0x116 ret_from_fork+0x3a/0x50 other info that might help us debug this: Possible interrupt unsafe locking scenario: CPU0 CPU1 ---- ---- lock(fs_reclaim); local_irq_disable(); lock(&gvt->scheduler.mmio_context_lock); lock(fs_reclaim); <Interrupt> lock(&gvt->scheduler.mmio_context_lock); *** DEADLOCK *** 3 locks held by cat/1439: #0: ffff888444a23698 (&p->lock){+.+.}-{3:3}, at: seq_read+0x49/0x680 #1: ffff88845b858068 (&gvt->lock){+.+.}-{3:3}, at: vgpu_mmio_diff_show+0xc7/0x2e0 #2: ffff88845b85cc90 (&gvt->scheduler.mmio_context_lock){+.-.}-{2:2}, at: vgpu_mmio_diff_show+0xcf/0x2e0 the dependencies between SOFTIRQ-irq-safe lock and the holding lock: -> (&gvt->scheduler.mmio_context_lock){+.-.}-{2:2} ops: 31 { HARDIRQ-ON-W at: lock_acquire+0x175/0x4e0 _raw_spin_lock_bh+0x2f/0x40 vgpu_mmio_diff_show+0xcf/0x2e0 seq_read+0x242/0x680 full_proxy_read+0x95/0xc0 vfs_read+0xc2/0x1b0 ksys_read+0xc4/0x160 do_syscall_64+0x63/0x290 entry_SYSCALL_64_after_hwframe+0x49/0xb3 IN-SOFTIRQ-W at: lock_acquire+0x175/0x4e0 _raw_spin_lock_irqsave+0x2b/0x40 shadow_context_status_change+0xfe/0x2f0 notifier_call_chain+0x6a/0xa0 __atomic_notifier_call_chain+0x5f/0xf0 execlists_schedule_out+0x42a/0x820 process_csb+0xe7/0x3e0 execlists_submission_tasklet+0x5c/0x1d0 tasklet_action_common.isra.0+0xeb/0x260 __do_softirq+0x11d/0x56f irq_exit+0xf6/0x100 do_IRQ+0x7f/0x160 ret_from_intr+0x0/0x2a cpuidle_enter_state+0xcd/0x5b0 cpuidle_enter+0x37/0x60 do_idle+0x337/0x3f0 cpu_startup_entry+0x14/0x20 start_kernel+0x58b/0x5c5 secondary_startup_64+0xa4/0xb0 INITIAL USE at: lock_acquire+0x175/0x4e0 _raw_spin_lock_irqsave+0x2b/0x40 shadow_context_status_change+0xfe/0x2f0 
notifier_call_chain+0x6a/0xa0 __atomic_notifier_call_chain+0x5f/0xf0 execlists_schedule_in+0x2c8/0x690 __execlists_submission_tasklet+0x1303/0x1930 execlists_submit_request+0x1e7/0x230 submit_notify+0x105/0x2a4 __i915_sw_fence_complete+0xaa/0x380 __engine_park+0x313/0x5a0 ____intel_wakeref_put_last+0x3e/0x90 intel_gt_resume+0x41e/0x440 intel_gt_init+0x283/0xbc0 i915_gem_init+0x197/0x240 i915_driver_probe+0xc2d/0x12e0 i915_pci_probe+0xa2/0x1e0 local_pci_probe+0x6f/0xb0 pci_device_probe+0x171/0x230 really_probe+0x17a/0x380 driver_probe_device+0x70/0xf0 device_driver_attach+0x82/0x90 __driver_attach+0x60/0x100 bus_for_each_dev+0xe4/0x140 bus_add_driver+0x257/0x2a0 driver_register+0xd3/0x150 i915_init+0x6d/0x80 do_one_initcall+0xb8/0x3a0 kernel_init_freeable+0x2b4/0x325 kernel_init+0x8/0x116 ret_from_fork+0x3a/0x50 } __key.77812+0x0/0x40 ... acquired at: lock_acquire+0x175/0x4e0 fs_reclaim_acquire.part.0+0x20/0x30 kmem_cache_alloc_trace+0x2e/0x260 mmio_diff_handler+0xc0/0x150 intel_gvt_for_each_tracked_mmio+0x7b/0x140 vgpu_mmio_diff_show+0x111/0x2e0 seq_read+0x242/0x680 full_proxy_read+0x95/0xc0 vfs_read+0xc2/0x1b0 ksys_read+0xc4/0x160 do_syscall_64+0x63/0x290 entry_SYSCALL_64_after_hwframe+0x49/0xb3 the dependencies between the lock to be acquired and SOFTIRQ-irq-unsafe lock: -> (fs_reclaim){+.+.}-{0:0} ops: 1999031 { HARDIRQ-ON-W at: lock_acquire+0x175/0x4e0 fs_reclaim_acquire.part.0+0x20/0x30 kmem_cache_alloc_node_trace+0x2e/0x290 alloc_worker+0x2b/0xb0 init_rescuer.part.0+0x17/0xe0 workqueue_init+0x293/0x3bb kernel_init_freeable+0x149/0x325 kernel_init+0x8/0x116 ret_from_fork+0x3a/0x50 SOFTIRQ-ON-W at: lock_acquire+0x175/0x4e0 fs_reclaim_acquire.part.0+0x20/0x30 kmem_cache_alloc_node_trace+0x2e/0x290 alloc_worker+0x2b/0xb0 init_rescuer.part.0+0x17/0xe0 workqueue_init+0x293/0x3bb kernel_init_freeable+0x149/0x325 kernel_init+0x8/0x116 ret_from_fork+0x3a/0x50 INITIAL USE at: lock_acquire+0x175/0x4e0 fs_reclaim_acquire.part.0+0x20/0x30 kmem_cache_alloc_node_trace+0x2e/0x290 alloc_worker+0x2b/0xb0 init_rescuer.part.0+0x17/0xe0 workqueue_init+0x293/0x3bb kernel_init_freeable+0x149/0x325 kernel_init+0x8/0x116 ret_from_fork+0x3a/0x50 } __fs_reclaim_map+0x0/0x60 ... acquired at: lock_acquire+0x175/0x4e0 fs_reclaim_acquire.part.0+0x20/0x30 kmem_cache_alloc_trace+0x2e/0x260 mmio_diff_handler+0xc0/0x150 intel_gvt_for_each_tracked_mmio+0x7b/0x140 vgpu_mmio_diff_show+0x111/0x2e0 seq_read+0x242/0x680 full_proxy_read+0x95/0xc0 vfs_read+0xc2/0x1b0 ksys_read+0xc4/0x160 do_syscall_64+0x63/0x290 entry_SYSCALL_64_after_hwframe+0x49/0xb3 stack backtrace: CPU: 5 PID: 1439 Comm: cat Not tainted 5.7.0-rc2 linux-kernel-labs#400 Hardware name: Intel(R) Client Systems NUC8i7BEH/NUC8BEB, BIOS BECFL357.86A.0056.2018.1128.1717 11/28/2018 Call Trace: dump_stack+0x97/0xe0 check_irq_usage.cold+0x428/0x434 ? check_usage_forwards+0x2c0/0x2c0 ? class_equal+0x11/0x20 ? __bfs+0xd2/0x2d0 ? in_any_class_list+0xa0/0xa0 ? check_path+0x22/0x40 ? check_noncircular+0x150/0x2b0 ? print_circular_bug.isra.0+0x1b0/0x1b0 ? mark_lock+0x13d/0xc50 ? __lock_acquire+0x1e32/0x39b0 __lock_acquire+0x1e32/0x39b0 ? timerqueue_add+0xc1/0x130 ? register_lock_class+0xa60/0xa60 ? mark_lock+0x13d/0xc50 lock_acquire+0x175/0x4e0 ? __zone_pcp_update+0x80/0x80 ? check_flags.part.0+0x210/0x210 ? mark_held_locks+0x65/0x90 ? _raw_spin_unlock_irqrestore+0x32/0x40 ? lockdep_hardirqs_on+0x190/0x290 ? fwtable_read32+0x163/0x480 ? mmio_diff_handler+0xc0/0x150 fs_reclaim_acquire.part.0+0x20/0x30 ? 
__zone_pcp_update+0x80/0x80 kmem_cache_alloc_trace+0x2e/0x260 mmio_diff_handler+0xc0/0x150 ? vgpu_mmio_diff_open+0x30/0x30 intel_gvt_for_each_tracked_mmio+0x7b/0x140 vgpu_mmio_diff_show+0x111/0x2e0 ? mmio_diff_handler+0x150/0x150 ? rcu_read_lock_sched_held+0xa0/0xb0 ? rcu_read_lock_bh_held+0xc0/0xc0 ? kasan_unpoison_shadow+0x33/0x40 ? __kasan_kmalloc.constprop.0+0xc2/0xd0 seq_read+0x242/0x680 ? debugfs_locked_down.isra.0+0x70/0x70 full_proxy_read+0x95/0xc0 vfs_read+0xc2/0x1b0 ksys_read+0xc4/0x160 ? kernel_write+0xb0/0xb0 ? mark_held_locks+0x24/0x90 do_syscall_64+0x63/0x290 entry_SYSCALL_64_after_hwframe+0x49/0xb3 RIP: 0033:0x7ffbe3e6efb2 Code: c0 e9 c2 fe ff ff 50 48 8d 3d ca cb 0a 00 e8 f5 19 02 00 0f 1f 44 00 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 0f 05 <48> 3d 00 f0 ff ff 77 56 c3 0f 1f 44 00 00 48 83 ec 28 48 89 54 24 RSP: 002b:00007ffd021c08a8 EFLAGS: 00000246 ORIG_RAX: 0000000000000000 RAX: ffffffffffffffda RBX: 0000000000020000 RCX: 00007ffbe3e6efb2 RDX: 0000000000020000 RSI: 00007ffbe34cd000 RDI: 0000000000000003 RBP: 00007ffbe34cd000 R08: 00007ffbe34cc010 R09: 0000000000000000 R10: 0000000000000022 R11: 0000000000000246 R12: 0000562b6f0a11f0 R13: 0000000000000003 R14: 0000000000020000 R15: 0000000000020000 ------------[ cut here ]------------ Acked-by: Zhenyu Wang <[email protected]> Signed-off-by: Colin Xu <[email protected]> Signed-off-by: Zhenyu Wang <[email protected]> Link: http://patchwork.freedesktop.org/patch/msgid/[email protected]
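The underlying rule is simply that an allocation made while a spinlock is held (here the softirq-safe mmio_context_lock) must not be allowed to enter fs reclaim, hence GFP_ATOMIC. A generic, hedged sketch of that pattern; the struct and function names are placeholders, not the i915/gvt code:

#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/types.h>
#include <linux/errno.h>

struct diff_entry_sketch {
	u32 offset;
	u32 value;
	struct list_head node;
};

/* Sketch: inside a spinlocked section, only non-sleeping allocations are safe. */
static int record_diff_sketch(spinlock_t *lock, struct list_head *items, u32 offset, u32 value)
{
	struct diff_entry_sketch *e;
	int ret = 0;

	spin_lock_bh(lock);
	e = kzalloc(sizeof(*e), GFP_ATOMIC);	/* GFP_KERNEL here could sleep in fs reclaim */
	if (!e) {
		ret = -ENOMEM;
	} else {
		e->offset = offset;
		e->value = value;
		list_add_tail(&e->node, items);
	}
	spin_unlock_bh(lock);
	return ret;
}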
dbaluta pushed a commit that referenced this pull request on Aug 10, 2020
devm_gpiod_get_index() doesn't return NULL but -ENOENT when the requested GPIO doesn't exist, leading to the following messages:

[ 2.742468] gpiod_direction_input: invalid GPIO (errorpointer)
[ 2.748147] can't set direction for gpio #2: -2
[ 2.753081] gpiod_direction_input: invalid GPIO (errorpointer)
[ 2.758724] can't set direction for gpio #3: -2
[ 2.763666] gpiod_direction_output: invalid GPIO (errorpointer)
[ 2.769394] can't set direction for gpio #4: -2
[ 2.774341] gpiod_direction_input: invalid GPIO (errorpointer)
[ 2.779981] can't set direction for gpio #5: -2
[ 2.784545] ff000a20.serial: ttyCPM1 at MMIO 0xfff00a20 (irq = 39, base_baud = 8250000) is a CPM UART

Use devm_gpiod_get_index_optional() instead. At the same time, handle the error case and properly exit with an error.

Fixes: 97cbaf2 ("tty: serial: cpm_uart: Convert to use GPIO descriptors")
Cc: [email protected]
Cc: Linus Walleij <[email protected]>
Signed-off-by: Christophe Leroy <[email protected]>
Reviewed-by: Linus Walleij <[email protected]>
Link: https://lore.kernel.org/r/694a25fdce548c5ee8b060ef6a4b02746b8f25c0.1591986307.git.christophe.leroy@csgroup.eu
Signed-off-by: Greg Kroah-Hartman <[email protected]>
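A hedged sketch of the _optional getter plus the error handling the message calls for; the connection id, index and flags are illustrative, not the cpm_uart code.

#include <linux/gpio/consumer.h>
#include <linux/device.h>
#include <linux/err.h>

/* Sketch: the _optional variant returns NULL when the GPIO simply is not
 * described, and an ERR_PTR for real errors, so both cases are distinguishable. */
static int request_one_gpio_sketch(struct device *dev, unsigned int index)
{
	struct gpio_desc *gpiod;

	gpiod = devm_gpiod_get_index_optional(dev, NULL, index, GPIOD_IN);
	if (IS_ERR(gpiod))
		return PTR_ERR(gpiod);	/* real error: fail the probe instead of warning later */
	if (!gpiod)
		return 0;		/* GPIO not present: nothing to set up */

	/* ... store and use the descriptor here ... */
	return 0;
}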
dbaluta pushed a commit that referenced this pull request on Aug 10, 2020
Jakub Sitnicki says:

====================
This patch set prepares ground for link-based multi-prog attachment for future netns attach types, with BPF_SK_LOOKUP attach type in mind [0].

Two changes are needed in order to attach and run a series of BPF programs:
1) a bpf_prog_array of programs to run (patch #2), and
2) a list of attached links to keep track of attachments (patch #3).

Nothing changes for BPF flow_dissector. Just as before, only one program can be attached to netns.

In v3 I've simplified patch #2 that introduces bpf_prog_array to take advantage of the fact that it will hold at most one program for now. In particular, I'm no longer using bpf_prog_array_copy. It turned out to be less suitable for link operations than I thought, as it fails to append the same BPF program. bpf_prog_array_replace_item is also gone, because we know we always want to replace the first element in prog_array.

Naturally the code that handles bpf_prog_array will need to change once more when there is a program type that allows multi-prog attachment. But I feel it will be better to do it gradually and present it together with tests that actually exercise multi-prog code paths.

[0] https://lore.kernel.org/bpf/[email protected]/

v2 -> v3:
- Don't check if run_array is null in link update callback. (Martin)
- Allow updating the link with the same BPF program. (Andrii)
- Add patch #4 with a test for the above case.
- Kill bpf_prog_array_replace_item. Access the run_array directly.
- Switch from bpf_prog_array_copy() to bpf_prog_array_alloc(1, ...).
- Replace rcu_deref_protected & RCU_INIT_POINTER with rcu_replace_pointer.
- Drop Andrii's Ack from patch #2. Code changed.

v1 -> v2:
- Show with a (void) cast that bpf_prog_array_replace_item() return value is ignored on purpose. (Andrii)
- Explain why bpf-cgroup cannot replace programs in bpf_prog_array based on bpf_prog pointer comparison in patch #2 description. (Andrii)
====================

Signed-off-by: Alexei Starovoitov <[email protected]>
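A hedged sketch of the single-slot run_array handling outlined in the cover letter, using bpf_prog_array_alloc(1, ...) and rcu_replace_pointer(); the attach-point pointer and mutex are placeholders, not the actual netns attach code.

#include <linux/bpf.h>
#include <linux/rcupdate.h>
#include <linux/mutex.h>
#include <linux/errno.h>

/* Sketch: build a one-element prog_array and swap it in under a mutex. */
static int netns_attach_sketch(struct bpf_prog_array __rcu **run_array_p,
			       struct bpf_prog *prog, struct mutex *attach_mutex)
{
	struct bpf_prog_array *new_array, *old_array;

	new_array = bpf_prog_array_alloc(1, GFP_KERNEL);	/* holds at most one program for now */
	if (!new_array)
		return -ENOMEM;
	new_array->items[0].prog = prog;

	mutex_lock(attach_mutex);
	old_array = rcu_replace_pointer(*run_array_p, new_array,
					lockdep_is_held(attach_mutex));
	mutex_unlock(attach_mutex);

	bpf_prog_array_free(old_array);	/* NULL-safe */
	return 0;
}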
dbaluta pushed a commit that referenced this pull request on Aug 10, 2020
…kernel/git/kvmarm/kvmarm into kvm-master

KVM/arm fixes for 5.8, take #2

- Make sure a vcpu becoming non-resident doesn't race against the doorbell delivery
- Only advertise pvtime if accounting is enabled
- Return the correct error code if reset fails with SVE
- Make sure that pseudo-NMI functions are annotated as __always_inline
dbaluta pushed a commit that referenced this pull request on Aug 10, 2020
In BRM_status_show(), if the condition "!ioc->is_warpdrive" tested on entry to the function is true, a "goto out" is called. This results in unlocking ioc->pci_access_mutex without this mutex lock being taken. This generates the following splat: [ 1148.539883] mpt3sas_cm2: BRM_status_show: BRM attribute is only for warpdrive [ 1148.547184] [ 1148.548708] ===================================== [ 1148.553501] WARNING: bad unlock balance detected! [ 1148.558277] 5.8.0-rc3+ torvalds#827 Not tainted [ 1148.562183] ------------------------------------- [ 1148.566959] cat/5008 is trying to release lock (&ioc->pci_access_mutex) at: [ 1148.574035] [<ffffffffc070b7a3>] BRM_status_show+0xd3/0x100 [mpt3sas] [ 1148.580574] but there are no more locks to release! [ 1148.585524] [ 1148.585524] other info that might help us debug this: [ 1148.599624] 3 locks held by cat/5008: [ 1148.607085] #0: ffff92aea3e392c0 (&p->lock){+.+.}-{3:3}, at: seq_read+0x34/0x480 [ 1148.618509] #1: ffff922ef14c4888 (&of->mutex){+.+.}-{3:3}, at: kernfs_seq_start+0x2a/0xb0 [ 1148.630729] #2: ffff92aedb5d7310 (kn->active#224){.+.+}-{0:0}, at: kernfs_seq_start+0x32/0xb0 [ 1148.643347] [ 1148.643347] stack backtrace: [ 1148.655259] CPU: 73 PID: 5008 Comm: cat Not tainted 5.8.0-rc3+ torvalds#827 [ 1148.665309] Hardware name: HGST H4060-S/S2600STB, BIOS SE5C620.86B.02.01.0008.031920191559 03/19/2019 [ 1148.678394] Call Trace: [ 1148.684750] dump_stack+0x78/0xa0 [ 1148.691802] lock_release.cold+0x45/0x4a [ 1148.699451] __mutex_unlock_slowpath+0x35/0x270 [ 1148.707675] BRM_status_show+0xd3/0x100 [mpt3sas] [ 1148.716092] dev_attr_show+0x19/0x40 [ 1148.723664] sysfs_kf_seq_show+0x87/0x100 [ 1148.731193] seq_read+0xbc/0x480 [ 1148.737882] vfs_read+0xa0/0x160 [ 1148.744514] ksys_read+0x58/0xd0 [ 1148.751129] do_syscall_64+0x4c/0xa0 [ 1148.757941] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [ 1148.766240] RIP: 0033:0x7f1230566542 [ 1148.772957] Code: Bad RIP value. [ 1148.779206] RSP: 002b:00007ffeac1bcac8 EFLAGS: 00000246 ORIG_RAX: 0000000000000000 [ 1148.790063] RAX: ffffffffffffffda RBX: 0000000000020000 RCX: 00007f1230566542 [ 1148.800284] RDX: 0000000000020000 RSI: 00007f1223460000 RDI: 0000000000000003 [ 1148.810474] RBP: 00007f1223460000 R08: 00007f122345f010 R09: 0000000000000000 [ 1148.820641] R10: 0000000000000022 R11: 0000000000000246 R12: 0000000000000000 [ 1148.830728] R13: 0000000000000003 R14: 0000000000020000 R15: 0000000000020000 Fix this by returning immediately instead of jumping to the out label. Link: https://lore.kernel.org/r/[email protected] Reviewed-by: Johannes Thumshirn <[email protected]> Acked-by: Sreekanth Reddy <[email protected]> Signed-off-by: Damien Le Moal <[email protected]> Signed-off-by: Martin K. Petersen <[email protected]>
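The shape of the fix is just an early return before the mutex is ever taken, instead of jumping to a shared exit label that unlocks it. A hedged sketch with a stand-in adapter struct (only the two fields named in the splat), not the mpt3sas source:

#include <linux/mutex.h>
#include <linux/types.h>

/* Minimal stand-in for the adapter fields named in the splat; illustrative. */
struct adapter_sketch {
	bool is_warpdrive;
	struct mutex pci_access_mutex;
};

static ssize_t brm_status_show_sketch(struct adapter_sketch *ioc, char *buf)
{
	ssize_t rc = 0;

	if (!ioc->is_warpdrive)
		return 0;	/* return here: nothing is locked yet, so no goto-unlock path */

	mutex_lock(&ioc->pci_access_mutex);
	/* ... format the BRM status into buf and set rc to the length ... */
	mutex_unlock(&ioc->pci_access_mutex);
	return rc;
}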
dbaluta pushed a commit that referenced this pull request on Aug 10, 2020
In pci_disable_sriov(), i.e., # echo 0 > /sys/class/net/enp11s0f1np1/device/sriov_numvfs iommu_release_device iommu_group_remove_device arm_smmu_domain_free kfree(smmu_domain) Later, iommu_release_device arm_smmu_release_device arm_smmu_detach_dev spin_lock_irqsave(&smmu_domain->devices_lock, would trigger an use-after-free. Fixed it by call arm_smmu_release_device() first before iommu_group_remove_device(). BUG: KASAN: use-after-free in __lock_acquire+0x3458/0x4440 __lock_acquire at kernel/locking/lockdep.c:4250 Read of size 8 at addr ffff0089df1a6f68 by task bash/3356 CPU: 5 PID: 3356 Comm: bash Not tainted 5.8.0-rc3-next-20200630 #2 Hardware name: HPE Apollo 70 /C01_APACHE_MB , BIOS L50_5.13_1.11 06/18/2019 Call trace: dump_backtrace+0x0/0x398 show_stack+0x14/0x20 dump_stack+0x140/0x1b8 print_address_description.isra.12+0x54/0x4a8 kasan_report+0x134/0x1b8 __asan_report_load8_noabort+0x2c/0x50 __lock_acquire+0x3458/0x4440 lock_acquire+0x204/0xf10 _raw_spin_lock_irqsave+0xf8/0x180 arm_smmu_detach_dev+0xd8/0x4a0 arm_smmu_detach_dev at drivers/iommu/arm-smmu-v3.c:2776 arm_smmu_release_device+0xb4/0x1c8 arm_smmu_disable_pasid at drivers/iommu/arm-smmu-v3.c:2754 (inlined by) arm_smmu_release_device at drivers/iommu/arm-smmu-v3.c:3000 iommu_release_device+0xc0/0x178 iommu_release_device at drivers/iommu/iommu.c:302 iommu_bus_notifier+0x118/0x160 notifier_call_chain+0xa4/0x128 __blocking_notifier_call_chain+0x70/0xa8 blocking_notifier_call_chain+0x14/0x20 device_del+0x618/0xa00 pci_remove_bus_device+0x108/0x2d8 pci_stop_and_remove_bus_device+0x1c/0x28 pci_iov_remove_virtfn+0x228/0x368 sriov_disable+0x8c/0x348 pci_disable_sriov+0x5c/0x70 mlx5_core_sriov_configure+0xd8/0x260 [mlx5_core] sriov_numvfs_store+0x240/0x318 dev_attr_store+0x38/0x68 sysfs_kf_write+0xdc/0x128 kernfs_fop_write+0x23c/0x448 __vfs_write+0x54/0xe8 vfs_write+0x124/0x3f0 ksys_write+0xe8/0x1b8 __arm64_sys_write+0x68/0x98 do_el0_svc+0x124/0x220 el0_sync_handler+0x260/0x408 el0_sync+0x140/0x180 Allocated by task 3356: save_stack+0x24/0x50 __kasan_kmalloc.isra.13+0xc4/0xe0 kasan_kmalloc+0xc/0x18 kmem_cache_alloc_trace+0x1ec/0x318 arm_smmu_domain_alloc+0x54/0x148 iommu_group_alloc_default_domain+0xc0/0x440 iommu_probe_device+0x1c0/0x308 iort_iommu_configure+0x434/0x518 acpi_dma_configure+0xf0/0x128 pci_dma_configure+0x114/0x160 really_probe+0x124/0x6d8 driver_probe_device+0xc4/0x180 __device_attach_driver+0x184/0x1e8 bus_for_each_drv+0x114/0x1a0 __device_attach+0x19c/0x2a8 device_attach+0x10/0x18 pci_bus_add_device+0x70/0xf8 pci_iov_add_virtfn+0x7b4/0xb40 sriov_enable+0x5c8/0xc30 pci_enable_sriov+0x64/0x80 mlx5_core_sriov_configure+0x58/0x260 [mlx5_core] sriov_numvfs_store+0x1c0/0x318 dev_attr_store+0x38/0x68 sysfs_kf_write+0xdc/0x128 kernfs_fop_write+0x23c/0x448 __vfs_write+0x54/0xe8 vfs_write+0x124/0x3f0 ksys_write+0xe8/0x1b8 __arm64_sys_write+0x68/0x98 do_el0_svc+0x124/0x220 el0_sync_handler+0x260/0x408 el0_sync+0x140/0x180 Freed by task 3356: save_stack+0x24/0x50 __kasan_slab_free+0x124/0x198 kasan_slab_free+0x10/0x18 slab_free_freelist_hook+0x110/0x298 kfree+0x128/0x668 arm_smmu_domain_free+0xf4/0x1a0 iommu_group_release+0xec/0x160 kobject_put+0xf4/0x238 kobject_del+0x110/0x190 kobject_put+0x1e4/0x238 iommu_group_remove_device+0x394/0x938 iommu_release_device+0x9c/0x178 iommu_release_device at drivers/iommu/iommu.c:300 iommu_bus_notifier+0x118/0x160 notifier_call_chain+0xa4/0x128 __blocking_notifier_call_chain+0x70/0xa8 blocking_notifier_call_chain+0x14/0x20 device_del+0x618/0xa00 pci_remove_bus_device+0x108/0x2d8 
pci_stop_and_remove_bus_device+0x1c/0x28 pci_iov_remove_virtfn+0x228/0x368 sriov_disable+0x8c/0x348 pci_disable_sriov+0x5c/0x70 mlx5_core_sriov_configure+0xd8/0x260 [mlx5_core] sriov_numvfs_store+0x240/0x318 dev_attr_store+0x38/0x68 sysfs_kf_write+0xdc/0x128 kernfs_fop_write+0x23c/0x448 __vfs_write+0x54/0xe8 vfs_write+0x124/0x3f0 ksys_write+0xe8/0x1b8 __arm64_sys_write+0x68/0x98 do_el0_svc+0x124/0x220 el0_sync_handler+0x260/0x408 el0_sync+0x140/0x180 The buggy address belongs to the object at ffff0089df1a6e00 which belongs to the cache kmalloc-512 of size 512 The buggy address is located 360 bytes inside of 512-byte region [ffff0089df1a6e00, ffff0089df1a7000) The buggy address belongs to the page: page:ffffffe02257c680 refcount:1 mapcount:0 mapping:0000000000000000 index:0xffff0089df1a1400 flags: 0x7ffff800000200(slab) raw: 007ffff800000200 ffffffe02246b8c8 ffffffe02257ff88 ffff000000320680 raw: ffff0089df1a1400 00000000002a000e 00000001ffffffff ffff0089df1a5001 page dumped because: kasan: bad access detected page->mem_cgroup:ffff0089df1a5001 Memory state around the buggy address: ffff0089df1a6e00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ffff0089df1a6e80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb >ffff0089df1a6f00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ^ ffff0089df1a6f80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ffff0089df1a7000: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc Fixes: a6a4c7e ("iommu: Add probe_device() and release_device() call-backs") Signed-off-by: Qian Cai <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Joerg Roedel <[email protected]>
dbaluta pushed a commit that referenced this pull request on Aug 10, 2020
Ido Schimmel says:

====================
mlxsw: Various fixes

Fix two issues found by syzkaller.

Patch #1 removes inappropriate usage of WARN_ON() following a memory allocation failure, which was constantly triggered when syzkaller injected faults.

Patch #2 fixes a use-after-free that can be triggered by 'devlink dev info' following a failed devlink reload.
====================

Signed-off-by: David S. Miller <[email protected]>
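The WARN_ON() point generalizes: allocation failure is an expected, recoverable condition and fault injection will hit it constantly, so it should be handled, not warned about. A trivial, hedged sketch of the pattern (not the mlxsw code):

#include <linux/slab.h>
#include <linux/types.h>

struct policer_counter_sketch {
	u64 drops;
};

/* Sketch: return NULL on allocation failure instead of tripping WARN_ON(). */
static struct policer_counter_sketch *alloc_counter_sketch(void)
{
	struct policer_counter_sketch *c = kzalloc(sizeof(*c), GFP_KERNEL);

	if (!c)		/* not: WARN_ON(!c) */
		return NULL;
	return c;
}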
dbaluta pushed a commit that referenced this pull request on Sep 18, 2020
Allocating memory with regulator_list_mutex held makes lockdep unhappy when memory pressure makes the system do fs_reclaim on e.g. eMMC using a regulator. Push the lock inside regulator_init_coupling() after the allocation.

======================================================
WARNING: possible circular locking dependency detected
5.7.13+ #533 Not tainted
------------------------------------------------------
kswapd0/383 is trying to acquire lock:
cca78ca4 (&sbi->write_io[i][j].io_rwsem){++++}-{3:3}, at: __submit_merged_write_cond+0x104/0x154
but task is already holding lock:
c0e38518 (fs_reclaim){+.+.}-{0:0}, at: __fs_reclaim_acquire+0x0/0x50
which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:
-> #2 (fs_reclaim){+.+.}-{0:0}:
   fs_reclaim_acquire.part.11+0x40/0x50
   fs_reclaim_acquire+0x24/0x28
   __kmalloc+0x54/0x218
   regulator_register+0x860/0x1584
   dummy_regulator_probe+0x60/0xa8
[...]
other info that might help us debug this:

Chain exists of:
  &sbi->write_io[i][j].io_rwsem --> regulator_list_mutex --> fs_reclaim

Possible unsafe locking scenario:

      CPU0                    CPU1
      ----                    ----
 lock(fs_reclaim);
                              lock(regulator_list_mutex);
                              lock(fs_reclaim);
 lock(&sbi->write_io[i][j].io_rwsem);

*** DEADLOCK ***

1 lock held by kswapd0/383:
#0: c0e38518 (fs_reclaim){+.+.}-{0:0}, at: __fs_reclaim_acquire+0x0/0x50
[...]

Fixes: d8ca7d1 ("regulator: core: Introduce API for regulators coupling customization")
Signed-off-by: Michał Mirosław <[email protected]>
Reviewed-by: Dmitry Osipenko <[email protected]>
Cc: [email protected]
Link: https://lore.kernel.org/r/1a889cf7f61c6429c9e6b34ddcdde99be77a26b6.1597195321.git.mirq-linux@rere.qmqm.pl
Signed-off-by: Mark Brown <[email protected]>
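A hedged sketch of the "allocate first, lock second" ordering described above; the mutex and array here are placeholders for regulator_list_mutex and the coupling array, not the regulator core code.

#include <linux/slab.h>
#include <linux/mutex.h>
#include <linux/errno.h>

static DEFINE_MUTEX(list_mutex_sketch);		/* stand-in for regulator_list_mutex */

/* Sketch: do the sleeping GFP_KERNEL allocation before taking the mutex,
 * so fs_reclaim can never end up nested inside it. */
static int init_coupling_sketch(int n_coupled, void ***coupled_out)
{
	void **coupled = kcalloc(n_coupled, sizeof(*coupled), GFP_KERNEL);

	if (!coupled)
		return -ENOMEM;

	mutex_lock(&list_mutex_sketch);
	*coupled_out = coupled;			/* ... resolve the coupling under the lock ... */
	mutex_unlock(&list_mutex_sketch);
	return 0;
}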
dbaluta pushed a commit that referenced this pull request on Sep 18, 2020
Nikolay reported a lockdep splat in generic/476 that I could reproduce with btrfs/187. ====================================================== WARNING: possible circular locking dependency detected 5.9.0-rc2+ #1 Tainted: G W ------------------------------------------------------ kswapd0/100 is trying to acquire lock: ffff9e8ef38b6268 (&delayed_node->mutex){+.+.}-{3:3}, at: __btrfs_release_delayed_node.part.0+0x3f/0x330 but task is already holding lock: ffffffffa9d74700 (fs_reclaim){+.+.}-{0:0}, at: __fs_reclaim_acquire+0x5/0x30 which lock already depends on the new lock. the existing dependency chain (in reverse order) is: -> #2 (fs_reclaim){+.+.}-{0:0}: fs_reclaim_acquire+0x65/0x80 slab_pre_alloc_hook.constprop.0+0x20/0x200 kmem_cache_alloc_trace+0x3a/0x1a0 btrfs_alloc_device+0x43/0x210 add_missing_dev+0x20/0x90 read_one_chunk+0x301/0x430 btrfs_read_sys_array+0x17b/0x1b0 open_ctree+0xa62/0x1896 btrfs_mount_root.cold+0x12/0xea legacy_get_tree+0x30/0x50 vfs_get_tree+0x28/0xc0 vfs_kern_mount.part.0+0x71/0xb0 btrfs_mount+0x10d/0x379 legacy_get_tree+0x30/0x50 vfs_get_tree+0x28/0xc0 path_mount+0x434/0xc00 __x64_sys_mount+0xe3/0x120 do_syscall_64+0x33/0x40 entry_SYSCALL_64_after_hwframe+0x44/0xa9 -> #1 (&fs_info->chunk_mutex){+.+.}-{3:3}: __mutex_lock+0x7e/0x7e0 btrfs_chunk_alloc+0x125/0x3a0 find_free_extent+0xdf6/0x1210 btrfs_reserve_extent+0xb3/0x1b0 btrfs_alloc_tree_block+0xb0/0x310 alloc_tree_block_no_bg_flush+0x4a/0x60 __btrfs_cow_block+0x11a/0x530 btrfs_cow_block+0x104/0x220 btrfs_search_slot+0x52e/0x9d0 btrfs_lookup_inode+0x2a/0x8f __btrfs_update_delayed_inode+0x80/0x240 btrfs_commit_inode_delayed_inode+0x119/0x120 btrfs_evict_inode+0x357/0x500 evict+0xcf/0x1f0 vfs_rmdir.part.0+0x149/0x160 do_rmdir+0x136/0x1a0 do_syscall_64+0x33/0x40 entry_SYSCALL_64_after_hwframe+0x44/0xa9 -> #0 (&delayed_node->mutex){+.+.}-{3:3}: __lock_acquire+0x1184/0x1fa0 lock_acquire+0xa4/0x3d0 __mutex_lock+0x7e/0x7e0 __btrfs_release_delayed_node.part.0+0x3f/0x330 btrfs_evict_inode+0x24c/0x500 evict+0xcf/0x1f0 dispose_list+0x48/0x70 prune_icache_sb+0x44/0x50 super_cache_scan+0x161/0x1e0 do_shrink_slab+0x178/0x3c0 shrink_slab+0x17c/0x290 shrink_node+0x2b2/0x6d0 balance_pgdat+0x30a/0x670 kswapd+0x213/0x4c0 kthread+0x138/0x160 ret_from_fork+0x1f/0x30 other info that might help us debug this: Chain exists of: &delayed_node->mutex --> &fs_info->chunk_mutex --> fs_reclaim Possible unsafe locking scenario: CPU0 CPU1 ---- ---- lock(fs_reclaim); lock(&fs_info->chunk_mutex); lock(fs_reclaim); lock(&delayed_node->mutex); *** DEADLOCK *** 3 locks held by kswapd0/100: #0: ffffffffa9d74700 (fs_reclaim){+.+.}-{0:0}, at: __fs_reclaim_acquire+0x5/0x30 #1: ffffffffa9d65c50 (shrinker_rwsem){++++}-{3:3}, at: shrink_slab+0x115/0x290 #2: ffff9e8e9da260e0 (&type->s_umount_key#48){++++}-{3:3}, at: super_cache_scan+0x38/0x1e0 stack backtrace: CPU: 1 PID: 100 Comm: kswapd0 Tainted: G W 5.9.0-rc2+ #1 Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.13.0-2.fc32 04/01/2014 Call Trace: dump_stack+0x92/0xc8 check_noncircular+0x12d/0x150 __lock_acquire+0x1184/0x1fa0 lock_acquire+0xa4/0x3d0 ? __btrfs_release_delayed_node.part.0+0x3f/0x330 __mutex_lock+0x7e/0x7e0 ? __btrfs_release_delayed_node.part.0+0x3f/0x330 ? __btrfs_release_delayed_node.part.0+0x3f/0x330 ? lock_acquire+0xa4/0x3d0 ? btrfs_evict_inode+0x11e/0x500 ? 
find_held_lock+0x2b/0x80 __btrfs_release_delayed_node.part.0+0x3f/0x330 btrfs_evict_inode+0x24c/0x500 evict+0xcf/0x1f0 dispose_list+0x48/0x70 prune_icache_sb+0x44/0x50 super_cache_scan+0x161/0x1e0 do_shrink_slab+0x178/0x3c0 shrink_slab+0x17c/0x290 shrink_node+0x2b2/0x6d0 balance_pgdat+0x30a/0x670 kswapd+0x213/0x4c0 ? _raw_spin_unlock_irqrestore+0x46/0x60 ? add_wait_queue_exclusive+0x70/0x70 ? balance_pgdat+0x670/0x670 kthread+0x138/0x160 ? kthread_create_worker_on_cpu+0x40/0x40 ret_from_fork+0x1f/0x30 This is because we are holding the chunk_mutex when we call btrfs_alloc_device, which does a GFP_KERNEL allocation. We don't want to switch that to a GFP_NOFS lock because this is the only place where it matters. So instead use memalloc_nofs_save() around the allocation in order to avoid the lockdep splat. Reported-by: Nikolay Borisov <[email protected]> CC: [email protected] # 4.4+ Reviewed-by: Anand Jain <[email protected]> Signed-off-by: Josef Bacik <[email protected]> Reviewed-by: David Sterba <[email protected]> Signed-off-by: David Sterba <[email protected]>
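The scoped-NOFS pattern the patch applies around btrfs_alloc_device() can be sketched generically as below; the kzalloc() stands in for the GFP_KERNEL allocation done inside btrfs_alloc_device(), so this is an illustration of the technique, not the actual diff.

#include <linux/sched/mm.h>
#include <linux/slab.h>

/* Sketch: inside the scope, GFP_KERNEL allocations behave as GFP_NOFS, so
 * no fs reclaim (and thus no chunk_mutex recursion) can happen here. */
static void *alloc_under_nofs_scope_sketch(size_t size)
{
	unsigned int nofs_flag;
	void *p;

	nofs_flag = memalloc_nofs_save();
	p = kzalloc(size, GFP_KERNEL);	/* effectively NOFS within the scope */
	memalloc_nofs_restore(nofs_flag);
	return p;
}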
dbaluta pushed a commit that referenced this pull request on Feb 3, 2021
Like other tunneling interfaces, the bareudp doesn't need TXLOCK. So, It is good to set the NETIF_F_LLTX flag to improve performance and to avoid lockdep's false-positive warning. Test commands: ip netns add A ip netns add B ip link add veth0 netns A type veth peer name veth1 netns B ip netns exec A ip link set veth0 up ip netns exec A ip a a 10.0.0.1/24 dev veth0 ip netns exec B ip link set veth1 up ip netns exec B ip a a 10.0.0.2/24 dev veth1 for i in {2..1} do let A=$i-1 ip netns exec A ip link add bareudp$i type bareudp \ dstport $i ethertype ip ip netns exec A ip link set bareudp$i up ip netns exec A ip a a 10.0.$i.1/24 dev bareudp$i ip netns exec A ip r a 10.0.$i.2 encap ip src 10.0.$A.1 \ dst 10.0.$A.2 via 10.0.$i.2 dev bareudp$i ip netns exec B ip link add bareudp$i type bareudp \ dstport $i ethertype ip ip netns exec B ip link set bareudp$i up ip netns exec B ip a a 10.0.$i.2/24 dev bareudp$i ip netns exec B ip r a 10.0.$i.1 encap ip src 10.0.$A.2 \ dst 10.0.$A.1 via 10.0.$i.1 dev bareudp$i done ip netns exec A ping 10.0.2.2 Splat looks like: [ 96.992803][ T822] ============================================ [ 96.993954][ T822] WARNING: possible recursive locking detected [ 96.995102][ T822] 5.10.0+ torvalds#819 Not tainted [ 96.995927][ T822] -------------------------------------------- [ 96.997091][ T822] ping/822 is trying to acquire lock: [ 96.998083][ T822] ffff88810f753898 (_xmit_NONE#2){+.-.}-{2:2}, at: __dev_queue_xmit+0x1f52/0x2960 [ 96.999813][ T822] [ 96.999813][ T822] but task is already holding lock: [ 97.001192][ T822] ffff88810c385498 (_xmit_NONE#2){+.-.}-{2:2}, at: __dev_queue_xmit+0x1f52/0x2960 [ 97.002908][ T822] [ 97.002908][ T822] other info that might help us debug this: [ 97.004401][ T822] Possible unsafe locking scenario: [ 97.004401][ T822] [ 97.005784][ T822] CPU0 [ 97.006407][ T822] ---- [ 97.007010][ T822] lock(_xmit_NONE#2); [ 97.007779][ T822] lock(_xmit_NONE#2); [ 97.008550][ T822] [ 97.008550][ T822] *** DEADLOCK *** [ 97.008550][ T822] [ 97.010057][ T822] May be due to missing lock nesting notation [ 97.010057][ T822] [ 97.011594][ T822] 7 locks held by ping/822: [ 97.012426][ T822] #0: ffff888109a144f0 (sk_lock-AF_INET){+.+.}-{0:0}, at: raw_sendmsg+0x12f7/0x2b00 [ 97.014191][ T822] #1: ffffffffbce2f5a0 (rcu_read_lock_bh){....}-{1:2}, at: ip_finish_output2+0x249/0x2020 [ 97.016045][ T822] #2: ffffffffbce2f5a0 (rcu_read_lock_bh){....}-{1:2}, at: __dev_queue_xmit+0x1fd/0x2960 [ 97.017897][ T822] #3: ffff88810c385498 (_xmit_NONE#2){+.-.}-{2:2}, at: __dev_queue_xmit+0x1f52/0x2960 [ 97.019684][ T822] #4: ffffffffbce2f600 (rcu_read_lock){....}-{1:2}, at: bareudp_xmit+0x31b/0x3690 [bareudp] [ 97.021573][ T822] #5: ffffffffbce2f5a0 (rcu_read_lock_bh){....}-{1:2}, at: ip_finish_output2+0x249/0x2020 [ 97.023424][ T822] #6: ffffffffbce2f5a0 (rcu_read_lock_bh){....}-{1:2}, at: __dev_queue_xmit+0x1fd/0x2960 [ 97.025259][ T822] [ 97.025259][ T822] stack backtrace: [ 97.026349][ T822] CPU: 3 PID: 822 Comm: ping Not tainted 5.10.0+ torvalds#819 [ 97.027609][ T822] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1 04/01/2014 [ 97.029407][ T822] Call Trace: [ 97.030015][ T822] dump_stack+0x99/0xcb [ 97.030783][ T822] __lock_acquire.cold.77+0x149/0x3a9 [ 97.031773][ T822] ? stack_trace_save+0x81/0xa0 [ 97.032661][ T822] ? register_lock_class+0x1910/0x1910 [ 97.033673][ T822] ? register_lock_class+0x1910/0x1910 [ 97.034679][ T822] ? rcu_read_lock_sched_held+0x91/0xc0 [ 97.035697][ T822] ? 
rcu_read_lock_bh_held+0xa0/0xa0 [ 97.036690][ T822] lock_acquire+0x1b2/0x730 [ 97.037515][ T822] ? __dev_queue_xmit+0x1f52/0x2960 [ 97.038466][ T822] ? check_flags+0x50/0x50 [ 97.039277][ T822] ? netif_skb_features+0x296/0x9c0 [ 97.040226][ T822] ? validate_xmit_skb+0x29/0xb10 [ 97.041151][ T822] _raw_spin_lock+0x30/0x70 [ 97.041977][ T822] ? __dev_queue_xmit+0x1f52/0x2960 [ 97.042927][ T822] __dev_queue_xmit+0x1f52/0x2960 [ 97.043852][ T822] ? netdev_core_pick_tx+0x290/0x290 [ 97.044824][ T822] ? mark_held_locks+0xb7/0x120 [ 97.045712][ T822] ? lockdep_hardirqs_on_prepare+0x12c/0x3e0 [ 97.046824][ T822] ? __local_bh_enable_ip+0xa5/0xf0 [ 97.047771][ T822] ? ___neigh_create+0x12a8/0x1eb0 [ 97.048710][ T822] ? trace_hardirqs_on+0x41/0x120 [ 97.049626][ T822] ? ___neigh_create+0x12a8/0x1eb0 [ 97.050556][ T822] ? __local_bh_enable_ip+0xa5/0xf0 [ 97.051509][ T822] ? ___neigh_create+0x12a8/0x1eb0 [ 97.052443][ T822] ? check_chain_key+0x244/0x5f0 [ 97.053352][ T822] ? rcu_read_lock_bh_held+0x56/0xa0 [ 97.054317][ T822] ? ip_finish_output2+0x6ea/0x2020 [ 97.055263][ T822] ? pneigh_lookup+0x410/0x410 [ 97.056135][ T822] ip_finish_output2+0x6ea/0x2020 [ ... ] Acked-by: Guillaume Nault <[email protected]> Fixes: 571912c ("net: UDP tunnel encapsulation module for tunnelling different protocols like MPLS, IP, NSH etc.") Signed-off-by: Taehee Yoo <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
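Setting the flag itself is a one-liner in the device setup callback; a hedged sketch follows (the function name is illustrative, not the bareudp driver's setup code).

#include <linux/netdevice.h>

/* Sketch: NETIF_F_LLTX tells the core not to take the per-queue xmit lock
 * for this device, which removes the false-positive recursion report for
 * tunnel-over-tunnel transmits. */
static void tunnel_setup_sketch(struct net_device *dev)
{
	dev->features |= NETIF_F_LLTX;
}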
dbaluta pushed a commit that referenced this pull request on Feb 3, 2021
ASD and TA share the same firmware in SIENNA_CICHLID and only TA firmware is requested during boot, so only need release TA firmware when remove device. [ 83.877150] general protection fault, probably for non-canonical address 0x1269f97e6ed04095: 0000 [#1] SMP PTI [ 83.888076] CPU: 0 PID: 1312 Comm: modprobe Tainted: G W OE 5.9.0-rc5-deli-amd-vangogh-0.0.6.6-114-gdd99d5669a96-dirty #2 [ 83.901160] Hardware name: System manufacturer System Product Name/TUF Z370-PLUS GAMING II, BIOS 0411 09/21/2018 [ 83.912353] RIP: 0010:free_fw_priv+0xc/0x120 [ 83.917531] Code: e8 99 cd b0 ff b8 a1 ff ff ff eb 9f 4c 89 f7 e8 8a cd b0 ff b8 f4 ff ff ff eb 90 0f 1f 00 0f 1f 44 00 00 55 48 89 e5 41 54 53 <4c> 8b 67 18 48 89 fb 4c 89 e7 e8 45 94 41 00 b8 ff ff ff ff f0 0f [ 83.937576] RSP: 0018:ffffbc34c13a3ce0 EFLAGS: 00010206 [ 83.943699] RAX: ffffffffbb681850 RBX: ffffa047f117eb60 RCX: 0000000080800055 [ 83.951879] RDX: ffffbc34c1d5f000 RSI: 0000000080800055 RDI: 1269f97e6ed04095 [ 83.959955] RBP: ffffbc34c13a3cf0 R08: 0000000000000000 R09: 0000000000000001 [ 83.968107] R10: ffffbc34c13a3cc8 R11: 00000000ffffff00 R12: ffffa047d6b23378 [ 83.976166] R13: ffffa047d6b23338 R14: ffffa047d6b240c8 R15: 0000000000000000 [ 83.984295] FS: 00007f74f6712540(0000) GS:ffffa047fbe00000(0000) knlGS:0000000000000000 [ 83.993323] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 84.000056] CR2: 0000556a1cca4e18 CR3: 000000021faa8004 CR4: 00000000003706f0 [ 84.008128] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 84.016155] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [ 84.024174] Call Trace: [ 84.027514] release_firmware.part.11+0x4b/0x70 [ 84.033017] release_firmware+0x13/0x20 [ 84.037803] psp_sw_fini+0x77/0xb0 [amdgpu] [ 84.042857] amdgpu_device_fini+0x38c/0x5d0 [amdgpu] [ 84.048815] amdgpu_driver_unload_kms+0x43/0x70 [amdgpu] [ 84.055055] drm_dev_unregister+0x73/0xb0 [drm] [ 84.060499] drm_dev_unplug+0x28/0x30 [drm] [ 84.065598] amdgpu_dev_uninit+0x1b/0x40 [amdgpu] [ 84.071223] amdgpu_pci_remove+0x4e/0x70 [amdgpu] [ 84.076835] pci_device_remove+0x3e/0xc0 [ 84.081609] device_release_driver_internal+0xfb/0x1c0 [ 84.087558] driver_detach+0x4d/0xa0 [ 84.092041] bus_remove_driver+0x5f/0xe0 [ 84.096854] driver_unregister+0x2f/0x50 [ 84.101594] pci_unregister_driver+0x22/0xa0 [ 84.106806] amdgpu_exit+0x15/0x2b [amdgpu] Signed-off-by: Dennis Li <[email protected]> Reviewed-by: Hawking Zhang <[email protected]> Signed-off-by: Alex Deucher <[email protected]>
dbaluta pushed a commit that referenced this pull request on Feb 3, 2021
Ido Schimmel says: ==================== nexthop: Various fixes This series contains various fixes for the nexthop code. The bugs were uncovered during the development of resilient nexthop groups. Patches #1-#2 fix the error path of nexthop_create_group(). I was not able to trigger these bugs with current code, but it is possible with the upcoming resilient nexthop groups code which adds a user controllable memory allocation further in the function. Patch #3 fixes wrong validation of netlink attributes. Patch #4 fixes wrong invocation of mausezahn in a selftest. ==================== Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
dbaluta pushed a commit that referenced this pull request on Feb 3, 2021
We had a kernel panic caused by a race between module unload and the last close confirmation.

Call trace:
[1196029.743127] free_sess+0x15/0x50 [rtrs_client]
[1196029.743128] rtrs_clt_close+0x4c/0x70 [rtrs_client]
[1196029.743129] ? rnbd_clt_unmap_device+0x1b0/0x1b0 [rnbd_client]
[1196029.743130] close_rtrs+0x25/0x50 [rnbd_client]
[1196029.743131] rnbd_client_exit+0x93/0xb99 [rnbd_client]
[1196029.743132] __x64_sys_delete_module+0x190/0x260

And in the crash dump, the confirmation kworker is also running:

PID: 6943 TASK: ffff9e2ac8098000 CPU: 4 COMMAND: "kworker/4:2"
#0 [ffffb206cf337c30] __schedule at ffffffff9f93f891
#1 [ffffb206cf337cc8] schedule at ffffffff9f93fe98
#2 [ffffb206cf337cd0] schedule_timeout at ffffffff9f943938
#3 [ffffb206cf337d50] wait_for_completion at ffffffff9f9410a7
#4 [ffffb206cf337da0] __flush_work at ffffffff9f08ce0e
#5 [ffffb206cf337e20] rtrs_clt_close_conns at ffffffffc0d5f668 [rtrs_client]
#6 [ffffb206cf337e48] rtrs_clt_close at ffffffffc0d5f801 [rtrs_client]
#7 [ffffb206cf337e68] close_rtrs at ffffffffc0d26255 [rnbd_client]
#8 [ffffb206cf337e78] free_sess at ffffffffc0d262ad [rnbd_client]
#9 [ffffb206cf337e88] rnbd_clt_put_dev at ffffffffc0d266a7 [rnbd_client]

The problem is that both code paths try to close the same session, which leads to the panic. To fix it, just skip the sess if its refcount has already dropped to 0.

Fixes: f7a7a5c ("block/rnbd: client: main functionality")
Signed-off-by: Jack Wang <[email protected]>
Reviewed-by: Gioh Kim <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
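A hedged sketch of the "skip a dying session" idea using the refcount API; the session struct, list and lock are assumptions for illustration, not the rnbd client code.

#include <linux/refcount.h>
#include <linux/list.h>
#include <linux/mutex.h>

struct sess_sketch {
	struct list_head list;
	refcount_t refcount;
};

/* Sketch: only take a reference if the session is still alive; one whose
 * refcount already hit zero is being freed by the other path, so skip it. */
static struct sess_sketch *find_live_sess_sketch(struct list_head *sessions, struct mutex *lock)
{
	struct sess_sketch *sess;

	mutex_lock(lock);
	list_for_each_entry(sess, sessions, list) {
		if (refcount_inc_not_zero(&sess->refcount)) {
			mutex_unlock(lock);
			return sess;		/* caller now owns a reference */
		}
	}
	mutex_unlock(lock);
	return NULL;
}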
dbaluta pushed a commit that referenced this pull request on Feb 3, 2021
The buffer list can contain zero skbs on the following path: tipc_named_node_up()->tipc_node_xmit()->tipc_link_xmit(), so we need to check the list before casting to an sk_buff.

Fault report:
[] tipc: Bulk publication failure
[] general protection fault, probably for non-canonical [#1] PREEMPT [...]
[] KASAN: null-ptr-deref in range [0x00000000000000c8-0x00000000000000cf]
[] CPU: 0 PID: 0 Comm: swapper/0 Kdump: loaded Not tainted 5.10.0-rc4+ #2
[] Hardware name: Bochs ..., BIOS Bochs 01/01/2011
[] RIP: 0010:tipc_link_xmit+0xc1/0x2180
[] Code: 24 b8 00 00 00 00 4d 39 ec 4c 0f 44 e8 e8 d7 0a 10 f9 48 [...]
[] RSP: 0018:ffffc90000006ea0 EFLAGS: 00010202
[] RAX: dffffc0000000000 RBX: ffff8880224da000 RCX: 1ffff11003d3cc0d
[] RDX: 0000000000000019 RSI: ffffffff886007b9 RDI: 00000000000000c8
[] RBP: ffffc90000007018 R08: 0000000000000001 R09: fffff52000000ded
[] R10: 0000000000000003 R11: fffff52000000dec R12: ffffc90000007148
[] R13: 0000000000000000 R14: 0000000000000000 R15: ffffc90000007018
[] FS: 0000000000000000(0000) GS:ffff888037400000(0000) knlGS:000[...]
[] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[] CR2: 00007fffd2db5000 CR3: 000000002b08f000 CR4: 00000000000006f0

Fixes: af9b028 ("tipc: make media xmit call outside node spinlock context")
Acked-by: Jon Maloy <[email protected]>
Signed-off-by: Hoang Le <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
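The defensive check amounts to "peek before you dereference"; a hedged sketch follows (the function is illustrative, not tipc_link_xmit() itself).

#include <linux/skbuff.h>

/* Sketch: skb_peek() returns NULL for an empty queue, so bail out rather
 * than treating list-head bytes as a struct sk_buff. */
static int xmit_head_sketch(struct sk_buff_head *list)
{
	struct sk_buff *skb = skb_peek(list);

	if (!skb)
		return 0;	/* nothing queued, nothing to send */

	/* ... read header fields from skb and transmit the queue ... */
	return 0;
}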
dbaluta
pushed a commit
that referenced
this pull request
Feb 3, 2021
Ido Schimmel says:

====================
mlxsw: core: Thermal control fixes

This series includes two fixes for thermal control in mlxsw.

Patch #1 validates that the alarm temperature threshold read from a
transceiver is above the warning temperature threshold. If not, the
current thresholds are maintained. It was observed that some
transceivers might be unreliable and sometimes report a too-low alarm
temperature threshold, which would result in thermal shutdown of the
system.

Patch #2 increases the temperature threshold above which thermal
shutdown is triggered for the ASIC thermal zone. It is currently too
low and might result in thermal shutdown under perfectly fine
operational conditions.
====================

Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
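A simplified sketch of the validation described for Patch #1, with hypothetical names and plain C instead of the driver's actual data structures: a new threshold pair is accepted only when the alarm value sits above the warning value, otherwise the current thresholds stay in place.

#include <stdbool.h>

struct temp_thresholds {
	int warn_mc;	/* warning threshold, millidegrees Celsius */
	int alarm_mc;	/* alarm/shutdown threshold, millidegrees Celsius */
};

/*
 * Accept a freshly read pair of thresholds only if it is internally
 * consistent (alarm strictly above warning); otherwise keep the values
 * currently in use so a bogus readout cannot trigger a thermal shutdown.
 */
bool thresholds_update(struct temp_thresholds *cur,
		       const struct temp_thresholds *readout)
{
	if (readout->alarm_mc <= readout->warn_mc)
		return false;	/* unreliable readout: keep current thresholds */

	*cur = *readout;
	return true;
}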
dbaluta
pushed a commit
that referenced
this pull request
May 3, 2022
…e_zone

btrfs_can_activate_zone() can be called with the device_list_mutex
already held, which will lead to a deadlock:

insert_dev_extents() // Takes device_list_mutex
`-> insert_dev_extent()
 `-> btrfs_insert_empty_item()
  `-> btrfs_insert_empty_items()
   `-> btrfs_search_slot()
    `-> btrfs_cow_block()
     `-> __btrfs_cow_block()
      `-> btrfs_alloc_tree_block()
       `-> btrfs_reserve_extent()
        `-> find_free_extent()
         `-> find_free_extent_update_loop()
          `-> can_allocate_chunk()
           `-> btrfs_can_activate_zone() // Takes device_list_mutex again

Instead of using RCU on fs_devices->device_list, we can use
fs_devices->alloc_list, protected by the chunk_mutex, to traverse the
list of active devices. We are in the chunk allocation thread, and newer
chunk allocation happens from the devices in fs_devices->alloc_list,
protected by the chunk_mutex:

  btrfs_create_chunk()
    lockdep_assert_held(&info->chunk_mutex);
    gather_device_info
      list_for_each_entry(device, &fs_devices->alloc_list, dev_alloc_list)

Also, a device that reappears after the mount won't join the alloc_list
yet; it will be on the dev_list, which we don't want to consider in the
context of chunk allocation.

[15.166572] WARNING: possible recursive locking detected
[15.167117] 5.17.0-rc6-dennis #79 Not tainted
[15.167487] --------------------------------------------
[15.167733] kworker/u8:3/146 is trying to acquire lock:
[15.167733] ffff888102962ee0 (&fs_devs->device_list_mutex){+.+.}-{3:3}, at: find_free_extent+0x15a/0x14f0 [btrfs]
[15.167733]
[15.167733] but task is already holding lock:
[15.167733] ffff888102962ee0 (&fs_devs->device_list_mutex){+.+.}-{3:3}, at: btrfs_create_pending_block_groups+0x20a/0x560 [btrfs]
[15.167733]
[15.167733] other info that might help us debug this:
[15.167733]  Possible unsafe locking scenario:
[15.167733]
[15.171834]        CPU0
[15.171834]        ----
[15.171834]   lock(&fs_devs->device_list_mutex);
[15.171834]   lock(&fs_devs->device_list_mutex);
[15.171834]
[15.171834]  *** DEADLOCK ***
[15.171834]
[15.171834]  May be due to missing lock nesting notation
[15.171834]
[15.171834] 5 locks held by kworker/u8:3/146:
[15.171834]  #0: ffff888100050938 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x1c3/0x5a0
[15.171834]  #1: ffffc9000067be80 ((work_completion)(&fs_info->async_data_reclaim_work)){+.+.}-{0:0}, at: process_one_work+0x1c3/0x5a0
[15.176244]  #2: ffff88810521e620 (sb_internal){.+.+}-{0:0}, at: flush_space+0x335/0x600 [btrfs]
[15.176244]  #3: ffff888102962ee0 (&fs_devs->device_list_mutex){+.+.}-{3:3}, at: btrfs_create_pending_block_groups+0x20a/0x560 [btrfs]
[15.176244]  #4: ffff8881152e4b78 (btrfs-dev-00){++++}-{3:3}, at: __btrfs_tree_lock+0x27/0x130 [btrfs]
[15.179641]
[15.179641] stack backtrace:
[15.179641] CPU: 1 PID: 146 Comm: kworker/u8:3 Not tainted 5.17.0-rc6-dennis #79
[15.179641] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1.fc35 04/01/2014
[15.179641] Workqueue: events_unbound btrfs_async_reclaim_data_space [btrfs]
[15.179641] Call Trace:
[15.179641]  <TASK>
[15.179641]  dump_stack_lvl+0x45/0x59
[15.179641]  __lock_acquire.cold+0x217/0x2b2
[15.179641]  lock_acquire+0xbf/0x2b0
[15.183838]  ? find_free_extent+0x15a/0x14f0 [btrfs]
[15.183838]  __mutex_lock+0x8e/0x970
[15.183838]  ? find_free_extent+0x15a/0x14f0 [btrfs]
[15.183838]  ? find_free_extent+0x15a/0x14f0 [btrfs]
[15.183838]  ? lock_is_held_type+0xd7/0x130
[15.183838]  ? find_free_extent+0x15a/0x14f0 [btrfs]
[15.183838]  find_free_extent+0x15a/0x14f0 [btrfs]
[15.183838]  ? _raw_spin_unlock+0x24/0x40
[15.183838]  ? btrfs_get_alloc_profile+0x106/0x230 [btrfs]
[15.187601]  btrfs_reserve_extent+0x131/0x260 [btrfs]
[15.187601]  btrfs_alloc_tree_block+0xb5/0x3b0 [btrfs]
[15.187601]  __btrfs_cow_block+0x138/0x600 [btrfs]
[15.187601]  btrfs_cow_block+0x10f/0x230 [btrfs]
[15.187601]  btrfs_search_slot+0x55f/0xbc0 [btrfs]
[15.187601]  ? lock_is_held_type+0xd7/0x130
[15.187601]  btrfs_insert_empty_items+0x2d/0x60 [btrfs]
[15.187601]  btrfs_create_pending_block_groups+0x2b3/0x560 [btrfs]
[15.187601]  __btrfs_end_transaction+0x36/0x2a0 [btrfs]
[15.192037]  flush_space+0x374/0x600 [btrfs]
[15.192037]  ? find_held_lock+0x2b/0x80
[15.192037]  ? btrfs_async_reclaim_data_space+0x49/0x180 [btrfs]
[15.192037]  ? lock_release+0x131/0x2b0
[15.192037]  btrfs_async_reclaim_data_space+0x70/0x180 [btrfs]
[15.192037]  process_one_work+0x24c/0x5a0
[15.192037]  worker_thread+0x4a/0x3d0

Fixes: a85f05e ("btrfs: zoned: avoid chunk allocation if active block group has enough space")
CC: [email protected] # 5.16+
Reviewed-by: Anand Jain <[email protected]>
Signed-off-by: Johannes Thumshirn <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
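A simplified userspace sketch of the locking shape described above, with invented types and a boolean flag standing in for lockdep_assert_held(): the traversal relies on a lock the caller already holds (chunk_mutex) instead of re-taking device_list_mutex, which is what removes the recursive-locking deadlock.

#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

struct zoned_device {
	struct zoned_device *next_alloc;	/* linkage on the allocation list */
	bool can_activate;
};

struct dev_lists {
	pthread_mutex_t device_list_mutex;	/* protects the full device list */
	pthread_mutex_t chunk_mutex;		/* protects alloc_list */
	bool chunk_mutex_held;			/* stand-in for lockdep_assert_held() */
	struct zoned_device *alloc_list;	/* devices eligible for allocation */
};

/*
 * Called from the chunk-allocation path, which already holds chunk_mutex.
 * Walking alloc_list under that existing lock means device_list_mutex is
 * never re-taken here.
 */
bool can_activate_any_zone(const struct dev_lists *devs)
{
	assert(devs->chunk_mutex_held);		/* caller must hold chunk_mutex */

	for (const struct zoned_device *d = devs->alloc_list; d; d = d->next_alloc) {
		if (d->can_activate)
			return true;
	}
	return false;
}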
Signed-off-by: Daniel Baluta <[email protected]>