Demotion reloaded #2
Conversation
To implement a new throttling policy for RT cgroups, the already existing mechanism is removed from rt.c.
Signed-off-by: Luca Abeni <[email protected]>
Cc: Tommaso Cucinotta <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Signed-off-by: Alessio Balsini <[email protected]>
The runtime of RT tasks controlled by cgroups is enforced by the SCHED_DEADLINE scheduling class, based on the runtime and period parameters (the deadline is set equal to the period). A sched_dl_entity may also represent a group of RT tasks, providing an rt_rq.
Signed-off-by: Andrea Parri <[email protected]>
Signed-off-by: Luca Abeni <[email protected]>
Cc: Tommaso Cucinotta <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Signed-off-by: Alessio Balsini <[email protected]>
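As a side note, a minimal userspace sketch of the budget accounting this describes (budget consumed while the group's RT tasks run, replenished every period, deadline equal to the period) could look like the following. The struct and function names here are invented for the illustration; they are not the ones used in the patchset.

    #include <stdio.h>

    /*
     * Toy model of the accounting described above: a group receives
     * 'dl_runtime' units of budget every 'dl_period' units of time, and
     * its deadline is simply the end of the current period.
     */
    struct toy_dl_entity {
        long long runtime;      /* budget left in the current period */
        long long dl_runtime;   /* budget granted per period */
        long long dl_period;    /* replenishment period (== relative deadline) */
        long long deadline;     /* absolute end of the current period */
        int throttled;          /* set when the budget is exhausted */
    };

    /* Charge 'delta' time units of RT execution to the group. */
    static void toy_update_curr(struct toy_dl_entity *dl, long long delta)
    {
        dl->runtime -= delta;
        if (dl->runtime <= 0)
            dl->throttled = 1;  /* the real code would demote/dequeue here */
    }

    /* Give the group a fresh budget at the start of a new period. */
    static void toy_replenish(struct toy_dl_entity *dl, long long now)
    {
        dl->runtime = dl->dl_runtime;
        dl->deadline = now + dl->dl_period;
        dl->throttled = 0;
    }

    int main(void)
    {
        struct toy_dl_entity grp = { 0, 30, 100, 0, 0 };
        long long now;

        toy_replenish(&grp, 0);
        for (now = 10; now <= 200; now += 10) {
            if (now >= grp.deadline)
                toy_replenish(&grp, now);
            if (!grp.throttled)
                toy_update_curr(&grp, 10);
            printf("t=%3lld runtime=%3lld throttled=%d\n",
                   now, grp.runtime, grp.throttled);
        }
        return 0;
    }

Running the toy shows the group executing for 30 units out of every 100 and sitting throttled for the rest of the period, which is the behaviour the deadline==period parameters express.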
Add a pointer to the rt_rq the task was on when it was demoted (i.e. before any migrations of the demoted task). This allows locking the correct rq when manipulating the rt_se's cfs_throttle_task lists.
Signed-off-by: Andres Oportus <[email protected]>
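Purely as an illustration of the locking rule (the types, names and pthread locking below are stand-ins, not the patch's code): record the rq the demotion happened on, and take that rq's lock for any later list manipulation, even if the task has migrated in the meantime.

    #include <pthread.h>
    #include <stdio.h>

    /* Toy runqueue: a lock plus a counter standing in for the per-rq
     * list of demoted/throttled tasks. */
    struct toy_rq {
        pthread_mutex_t lock;
        int nr_demoted;
    };

    struct toy_task {
        struct toy_rq *demoted_on;  /* rq the task was demoted on */
    };

    static void toy_demote(struct toy_task *p, struct toy_rq *rq)
    {
        pthread_mutex_lock(&rq->lock);
        p->demoted_on = rq;
        rq->nr_demoted++;
        pthread_mutex_unlock(&rq->lock);
    }

    /* Undo the demotion: lock the rq recorded at demotion time, not the
     * rq the task may currently be running on. */
    static void toy_undemote(struct toy_task *p)
    {
        struct toy_rq *rq = p->demoted_on;

        pthread_mutex_lock(&rq->lock);
        rq->nr_demoted--;
        p->demoted_on = NULL;
        pthread_mutex_unlock(&rq->lock);
    }

    static struct toy_rq rq0 = { PTHREAD_MUTEX_INITIALIZER, 0 };

    int main(void)
    {
        struct toy_task p = { NULL };

        toy_demote(&p, &rq0);
        printf("demoted on rq0, nr_demoted = %d\n", rq0.nr_demoted);
        toy_undemote(&p);
        printf("after undo, nr_demoted = %d\n", rq0.nr_demoted);
        return 0;
    }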
…ottled dl (rt group) tasks
/*
    if (running)
        put_prev_task(rq, p);
*/
Why is this commented out?
I replied to the email, but I do not see replies here... So, here it is again:
this function is invoked by cfs_throttle_rt_tasks(), which is invoked by update_curr_rt().
Invoking put_prev_task() would result in another invocation of update_curr_rt(), potentially causing some issues.
OK. Also, considering that put_prev_task_rt() would only enqueue the task in the pushable list (and we don't want that), it seems safe to remove it.
Now I remember: I removed it because of a crash I was seeing, which I concluded was caused by infinite recursion (update_curr_rt -> cfs_throttle_rt_tasks -> __setprio_fifo -> put_prev_task_rt -> update_curr_rt -> cfs_throttle_rt_tasks -> ...).
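To make that cycle concrete, here is a stripped-down, compilable sketch of the re-entrancy being described. The function names follow the discussion, but the bodies are placeholders, and the depth counter exists only so this toy terminates; the patch itself avoids the problem by dropping the put_prev_task() call instead.

    #include <stdio.h>

    static int depth;

    static void update_curr_rt(void);

    static void put_prev_task_rt(void)
    {
        update_curr_rt();           /* accounting runs again */
    }

    static void __setprio_fifo(void)
    {
        put_prev_task_rt();         /* the call that was commented out */
    }

    static void cfs_throttle_rt_tasks(void)
    {
        __setprio_fifo();
    }

    static void update_curr_rt(void)
    {
        printf("update_curr_rt(), depth %d\n", depth);
        if (depth++ > 3) {          /* break the otherwise infinite cycle */
            printf("... and so on: unbounded recursion\n");
            return;
        }
        cfs_throttle_rt_tasks();
        depth--;
    }

    int main(void)
    {
        update_curr_rt();
        return 0;
    }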
    enqueue_task(cpu_rq(cpu), p, ENQUEUE_REPLENISH | ENQUEUE_MOVE | ENQUEUE_RESTORE);

    check_class_changed(cpu_rq(cpu), p, prev_class, oldprio);
out:
This label doesn't seem to be used.
Uhmm... Right. I suspect it is a leftover from some previous change; I am going to check.
Ok, I checked: your original patch contained

    if (p->sched_class == &rt_sched_class)
        goto out;

near the beginning of __setprio_fifo(). Since I think that entering __setprio_fifo() with sched_class == rt_sched_class should never happen, I changed this to

    BUG_ON(p->sched_class == &rt_sched_class);

but I forgot to remove the "out:" label.
On 22 June 2017 at 12:37, Juri Lelli ***@***.***> wrote:
***@***.**** commented on this pull request.
------------------------------
In kernel/sched/core.c
<#2 (comment)>:
> + const struct sched_class *prev_class;
+
+ lockdep_assert_held(&rq->lock);
+
+ oldprio = p->prio;
+ prev_class = p->sched_class;
+ queued = task_on_rq_queued(p);
+ running = task_current(rq, p);
+ BUG_ON(!rt_throttled(p));
+
+ if (queued)
+ dequeue_task(rq, p, DEQUEUE_SAVE | DEQUEUE_MOVE);
+/*
+ if (running)
+ put_prev_task(rq, p);
+*/
Why is this commented out?
This is the code demoting a task from RT to CFS; it is invoked when the
runtime (of the dl_entity associated with the RT runqueue) becomes
negative. This is done by update_curr_rt(). Invoking put_prev_task() would
invoke update_curr_rt() again, resulting in potential issues.
So, periodic and periodic1 don't seem to have problems (anymore).
Uhm... This crash looks like the previous one... I am going to check if I
can reproduce it
…On 22 June 2017 at 14:21, Juri Lelli ***@***.***> wrote:
So, periodic and periodic1 doesn't seem to have problems (anymore).
But, periodic2 generates the following (when run for over 100 sec):
[ 147.659662] Unable to handle kernel NULL pointer dereference at virtual address 00000038
[ 147.667862] pgd = ffffff800a7ac000
[ 147.671300] [00000038] *pgd=000000007bffe003, *pud=000000007bffe003, *pmd=0000000000000000
[ 147.679683] Internal error: Oops: 96000006 [#1] PREEMPT SMP
[ 147.685326] CPU: 3 PID: 0 Comm: swapper/3 Not tainted 4.4.43-HCBS-Demotion-05354-g332859bfbe08-dirty #4
[ 147.694823] Hardware name: HiKey Development Board (DT)
[ 147.700106] task: ffffffc035156100 ti: ffffffc035158000 task.ti: ffffffc035158000
[ 147.707681] PC is at set_next_entity+0x2c/0x10a0
[ 147.712351] LR is at pick_next_task_fair+0xb0/0xd10
[ 147.717281] pc : [<ffffff800810a3d8>] lr : [<ffffff800811908c>] pstate: 600001c5
[ 147.724759] sp : ffffffc03515bd50
[ 147.728107] x29: ffffffc03515bd50 x28: ffffff8008d60428
[ 147.733491] x27: ffffff8008d60000 x26: ffffffc0794a6f80
[ 147.738873] x25: ffffffc035156700 x24: 0000000000000000
[ 147.744254] x23: ffffff8009854000 x22: ffffffc0794a6f98 [ 147.749209] CPU0: update max cpu_capacity 1024
[ 147.753970]
[ 147.755650] x21: ffffffc0794a7038 x20: ffffffc0794a6f80
[ 147.761028] x19: 0000000000000000 x18: 0000000000000000
[ 147.766405] x17: 0000000000000000 x16: 0000000000000000
[ 147.771783] x15: 0000000000000000 x14: 0000000000000000
[ 147.777160] x13: 0000000000000000 x12: 0000000034d5d91d
[ 147.782537] x11: ffffff8008d60420 x10: 0000000000000005
[ 147.787914] x9 : ffffff80098f7000 x8 : 0000000000000004
[ 147.793292] x7 : ffffff8008d40980 x6 : 0000000000000000
[ 147.798668] x5 : 0000000000000080 x4 : ffffff8008118fdc
[ 147.804044] x3 : 0000000000000001 x2 : ffffff80081069ec
[ 147.809421] x1 : 0000000000000000 x0 : ffffff800811908c
[ 147.814801]
[ 147.814801] SP: 0xffffffc03515bcd0:
[ 147.819817] bcd0 794a6f98 ffffffc0 09854000 ffffff80 00000000 00000000 35156700 ffffffc0
[ 147.828134] bcf0 794a6f80 ffffffc0 08d60000 ffffff80 08d60428 ffffff80 3515bd50 ffffffc0
[ 147.836448] bd10 0811908c ffffff80 3515bd50 ffffffc0 0810a3d8 ffffff80 600001c5 00000000
[ 147.844763] bd30 3515bd80 ffffffc0 081316d4 ffffff80 ffffffff ffffffff 35158000 ffffffc0
[ 147.853079] bd50 3515bde0 ffffffc0 0811908c ffffff80 00000000 00000000 794a6f80 ffffffc0
[ 147.861393] bd70 794a7038 ffffffc0 794a6f98 ffffffc0 09854000 ffffff80 00000000 00000000
[ 147.869708] bd90 35156700 ffffffc0 794a6f80 ffffffc0 08d60000 ffffff80 08d60428 ffffff80
[ 147.878024] bdb0 794a7038 ffffffc0 794a6f80 ffffffc0 794a7038 ffffffc0 794a6f98 ffffffc0
[ 147.886347]
[ 147.886347] X20: 0xffffffc0794a6f00:
[ 147.891451] 6f00 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 147.899767] 6f20 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 147.908082] 6f40 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 147.916397] 6f60 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 147.924713] 6f80 f009f008 dead4ead 00000003 00000000 35156100 ffffffc0 099b0f60 ffffff80
[ 147.933028] 6fa0 09bcd430 ffffff80 09bdbac0 ffffff80 0900fe08 ffffff80 00000003 00000000
[ 147.941343] 6fc0 080fc5e8 ffffff80 00000001 00000000 00000000 00000000 00000000 00000000
[ 147.949658] 6fe0 00000004 00000000 00000010 00000000 0000001b 00000000 ffff6b0f 00000000
[ 147.957974]
[ 147.957974] X21: 0xffffffc0794a6fb8:
[ 147.963078] 6fb8 00000003 00000000 080fc5e8 ffffff80 00000001 00000000 00000000 00000000
[ 147.971394] 6fd8 00000000 00000000 00000004 00000000 00000010 00000000 0000001b 00000000
[ 147.979709] 6ff8 ffff6b0f 00000000 00000000 00000000 00000000 00000000 00000001 00000000
[ 147.988025] 7018 000000ce 00000000 00000000 00000000 000043b8 00000000 00007b88 00000000
[ 147.996339] 7038 000000ce 00000000 00000000 00000000 00000001 00000001 7db4db90 00000008
[ 148.004654] 7058 18552f7c 00000034 44a07210 ffffffc0 00000000 00000000 00000000 00000000
[ 148.012969] 7078 00000000 00000000 00000000 00000000 00000000 00000000 0000001e 00000000
[ 148.021284] 7098 6132d986 00000022 002356fe 00000000 00b376ae 00000052 00000030 00000000
[ 148.029600]
[ 148.029600] X22: 0xffffffc0794a6f18:
[ 148.034704] 6f18 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 148.043018] 6f38 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 148.051334] 6f58 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 148.059649] 6f78 00000000 00000000 f009f008 dead4ead 00000003 00000000 35156100 ffffffc0
[ 148.067964] 6f98 099b0f60 ffffff80 09bcd430 ffffff80 09bdbac0 ffffff80 0900fe08 ffffff80
[ 148.076279] 6fb8 00000003 00000000 080fc5e8 ffffff80 00000001 00000000 00000000 00000000
[ 148.084594] 6fd8 00000000 00000000 00000004 00000000 00000010 00000000 0000001b 00000000
[ 148.092909] 6ff8 ffff6b0f 00000000 00000000 00000000 00000000 00000000 00000001 00000000
[ 148.101226]
[ 148.101226] X25: 0xffffffc035156680:
[ 148.106330] 6680 00000001 00000000 00000000 00000000 00000001 00000000 00000000 00000000
[ 148.114645] 66a0 00000000 00000000 00000000 00000000 00000000 dead4ead ffffffff 00000000
[ 148.122960] 66c0 ffffffff ffffffff 099ad6e8 ffffff80 00000000 00000000 00000000 00000000
[ 148.131276] 66e0 0900a3b0 ffffff80 00000000 00000000 00000000 00000000 00000000 00000000
[ 148.139592] 6700 00002b64 00000000 03938700 00000000 03938700 00000000 00000000 00000000
[ 148.147907] 6720 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 148.156223] 6740 35156740 ffffffc0 35156740 ffffffc0 35156750 ffffffc0 35156750 ffffffc0
[ 148.164539] 6760 35156760 ffffffc0 35156760 ffffffc0 00000000 00000000 3d922040 ffffffc0
[ 148.172857]
[ 148.172857] X26: 0xffffffc0794a6f00:
[ 148.177960] 6f00 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 148.186275] 6f20 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 148.194590] 6f40 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 148.202905] 6f60 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 148.211220] 6f80 f009f008 dead4ead 00000003 00000000 35156100 ffffffc0 099b0f60 ffffff80
[ 148.219535] 6fa0 09bcd430 ffffff80 09bdbac0 ffffff80 0900fe08 ffffff80 00000003 00000000
[ 148.227850] 6fc0 080fc5e8 ffffff80 00000001 00000000 00000000 00000000 00000000 00000000
[ 148.236165] 6fe0 00000004 00000000 00000010 00000000 0000001b 00000000 ffff6b0f 00000000
[ 148.244482]
[ 148.244482] X29: 0xffffffc03515bcd0:
[ 148.249585] bcd0 794a6f98 ffffffc0 09854000 ffffff80 00000000 00000000 35156700 ffffffc0
[ 148.257900] bcf0 794a6f80 ffffffc0 08d60000 ffffff80 08d60428 ffffff80 3515bd50 ffffffc0
[ 148.266215] bd10 0811908c ffffff80 3515bd50 ffffffc0 0810a3d8 ffffff80 600001c5 00000000
[ 148.274531] bd30 3515bd80 ffffffc0 081316d4 ffffff80 ffffffff ffffffff 35158000 ffffffc0
[ 148.282846] bd50 3515bde0 ffffffc0 0811908c ffffff80 00000000 00000000 794a6f80 ffffffc0
[ 148.291161] bd70 794a7038 ffffffc0 794a6f98 ffffffc0 09854000 ffffff80 00000000 00000000
[ 148.299477] bd90 35156700 ffffffc0 794a6f80 ffffffc0 08d60000 ffffff80 08d60428 ffffff80
[ 148.307792] bdb0 794a7038 ffffffc0 794a6f80 ffffffc0 794a7038 ffffffc0 794a6f98 ffffffc0
[ 148.316107]
[ 148.317611] Process swapper/3 (pid: 0, stack limit = 0xffffffc035158020)
[ 148.324384] Stack: (0xffffffc03515bd50 to 0xffffffc03515c000)
[ 148.330191] bd40: ffffffc03515bde0 ffffff800811908c
[ 148.338107] bd60: 0000000000000000 ffffffc0794a6f80 ffffffc0794a7038 ffffffc0794a6f98
[ 148.346022] bd80: ffffff8009854000 0000000000000000 ffffffc035156700 ffffffc0794a6f80
[ 148.353937] bda0: ffffff8008d60000 ffffff8008d60428 ffffffc0794a7038 ffffffc0794a6f80
[ 148.361852] bdc0: ffffffc0794a7038 ffffffc0794a6f98 ffffff8009854000 ffffff80081316d4
[ 148.369767] bde0: ffffffc03515be90 ffffff8008d40cb4 ffffffc0794a6f80 ffffffc035156100
[ 148.377683] be00: 0000000000000000 ffffffc0794a6f98 ffffff8009854000 0000000000000000
[ 148.385598] be20: ffffffc035156700 ffffffc0794a6f80 ffffff8008d60000 ffffff8008d60428
[ 148.393513] be40: ffffff80093def80 ffffffc035156100 ffffffc0794a7038 ffffff80098f7cd0
[ 148.401428] be60: ffffffc0794a7038 ffffff8008d605f0 ffffffc000000000 ffffffc035156100
[ 148.409343] be80: ffffffc0794a6f80 ffffffc035156100 ffffffc03515bf20 ffffff8008d41574
[ 148.417258] bea0: ffffffc035158000 ffffff8008d5f000 ffffff80099a0000 ffffffc071d94400
[ 148.425173] bec0: ffffff8009946ab8 ffffff8009218cc0 ffffff80093ddc50 ffffffc035158000
[ 148.433088] bee0: ffffff800999e000 ffffff8009852000 ffffffc03515bf20 ffffff8008d4156c
[ 148.441004] bf00: ffffffc035158000 ffffff8008d5f000 ffffff80099a0000 ffffff8008d41574
[ 148.448919] bf20: ffffffc03515bf40 ffffff8008d415f0 ffffff8009852000 ffffff8008d5f000
[ 148.456834] bf40: ffffffc03515bf50 ffffff8008121754 ffffffc03515bfc0 ffffff8008090e64
[ 148.464749] bf60: 0000000000000003 ffffff800989e080 ffffffc035158000 0000000000000000
[ 148.472663] bf80: 0000000000000000 0000000000000000 00000000027a9000 00000000027ac000
[ 148.480579] bfa0: ffffff80080828d0 0000000000000000 00000000ffffffff ffffffc035158000
[ 148.488494] bfc0: 0000000000000000 0000000000d4d03c 0000000034d5d91d 0000000000000e12
[ 148.496409] bfe0: 0000000000000000 0000000000000000 00ee003e00e900a5 e9db62ffd3fb42ff
[ 148.504322] Call trace:
[ 148.506793] Exception stack(0xffffffc03515bb80 to 0xffffffc03515bcb0)
[ 148.513303] bb80: 0000000000000000 0000008000000000 ffffffc03515bd50 ffffff800810a3d8
[ 148.521218] bba0: 0000000000000055 0000000000000114 ffffffc03515bcd0 ffffff8008136434
[ 148.529134] bbc0: ffffffc035158000 ffffff800a6f0288 0000000000000000 0000000000000000
[ 148.537048] bbe0: 0000000000000002 0000000000000001 0000000000000000 ffffff800816cae8
[ 148.544963] bc00: 00000000000001c0 ffffff80099a0468 0000000000000000 0000000000000000
[ 148.552878] bc20: ffffff800811908c 0000000000000000 ffffff80081069ec 0000000000000001
[ 148.560793] bc40: ffffff8008118fdc 0000000000000080 0000000000000000 ffffff8008d40980
[ 148.568708] bc60: 0000000000000004 ffffff80098f7000 0000000000000005 ffffff8008d60420
[ 148.576622] bc80: 0000000034d5d91d 0000000000000000 0000000000000000 0000000000000000
[ 148.584536] bca0: 0000000000000000 0000000000000000
[ 148.589468] [<ffffff800810a3d8>] set_next_entity+0x2c/0x10a0
[ 148.595189] [<ffffff800811908c>] pick_next_task_fair+0xb0/0xd10
[ 148.601176] [<ffffff8008d40cb4>] __schedule+0x420/0xc10
[ 148.606458] [<ffffff8008d41574>] schedule+0x40/0xa0
[ 148.611389] [<ffffff8008d415f0>] schedule_preempt_disabled+0x1c/0x2c
[ 148.617815] [<ffffff8008121754>] cpu_startup_entry+0x13c/0x464
[ 148.623713] [<ffffff8008090e64>] secondary_start_kernel+0x164/0x1b4
[ 148.630046] [<0000000000d4d03c>] 0xd4d03c
[ 148.634099] Code: aa0103f3 aa0003f5 aa1e03e0 d503201f (b9403a60)
[ 148.749686] BUG: spinlock lockup suspected on CPU#0, kworker/0:1/578
[ 148.756117] lock: 0xffffffc0794a6f80, .magic: dead4ead, .owner: swapper/3/0, .owner_cpu: 3
[ 148.764563] CPU: 0 PID: 578 Comm: kworker/0:1 Tainted: G D 4.4.43-HCBS-Demotion-05354-g332859bfbe08-dirty #4
[ 148.775637] Hardware name: HiKey Development Board (DT)
[ 148.780924] Workqueue: events_freezable thermal_zone_device_check
[ 148.787086] Call trace:
[ 148.789559] [<ffffff800808ae98>] dump_backtrace+0x0/0x1e0
[ 148.795016] [<ffffff800808b098>] show_stack+0x20/0x28
[ 148.800125] [<ffffff8008553374>] dump_stack+0xa8/0xe0
[ 148.805231] [<ffffff800813a4d4>] spin_dump+0x78/0x9c
[ 148.810248] [<ffffff800813a7c8>] do_raw_spin_lock+0x180/0x1b4
[ 148.816057] [<ffffff8008d46fb4>] _raw_spin_lock_irqsave+0x78/0x98
[ 148.822217] [<ffffff8008123a60>] cpufreq_notifier_trans+0x128/0x14c
[ 148.828552] [<ffffff80080ef154>] notifier_call_chain+0x64/0x9c
[ 148.834449] [<ffffff80080efbdc>] __srcu_notifier_call_chain+0xa0/0xf0
[ 148.840958] [<ffffff80080efc64>] srcu_notifier_call_chain+0x38/0x44
[ 148.847296] [<ffffff80088f5644>] cpufreq_notify_transition+0xfc/0x2e0
[ 148.853807] [<ffffff80088f7bec>] cpufreq_freq_transition_end+0x3c/0xb0
[ 148.860405] [<ffffff80088f84a0>] __cpufreq_driver_target+0x1dc/0x320
[ 148.866829] [<ffffff80088fa460>] cpufreq_governor_performance+0x50/0x60
[ 148.873516] [<ffffff80088f6034>] __cpufreq_governor+0xb8/0x1ec
[ 148.879411] [<ffffff80088f6994>] cpufreq_set_policy+0x2ac/0x3f0
[ 148.885394] [<ffffff80088f9164>] cpufreq_update_policy+0x84/0x114
[ 148.891555] [<ffffff80088da4ec>] cpufreq_set_cur_state+0x64/0x94
[ 148.897626] [<ffffff80088d4ca4>] thermal_cdev_update.part.26+0x9c/0x22c
[ 148.904312] [<ffffff80088d5b48>] power_actor_set_power+0x70/0x9c
[ 148.910384] [<ffffff80088d9bc0>] power_allocator_throttle+0x4c8/0xad8
[ 148.916893] [<ffffff80088d4e9c>] handle_thermal_trip.part.21+0x68/0x334
[ 148.923579] [<ffffff80088d56e4>] thermal_zone_device_update+0xb8/0x280
[ 148.930177] [<ffffff80088d58cc>] thermal_zone_device_check+0x20/0x2c
[ 148.936601] [<ffffff80080e55a8>] process_one_work+0x1f8/0x70c
[ 148.942408] [<ffffff80080e5bf8>] worker_thread+0x13c/0x4a4
[ 148.947953] [<ffffff80080ed5cc>] kthread+0xe8/0xfc
[ 148.952796] [<ffffff8008085ed0>] ret_from_fork+0x10/0x40
[ 149.166404] BUG: spinlock lockup suspected on CPU#4, periodic2.sh/2858
[ 149.173013] lock: 0xffffffc0794a6f80, .magic: dead4ead, .owner: swapper/3/0, .owner_cpu: 3
[ 149.181457] CPU: 4 PID: 2858 Comm: periodic2.sh Tainted: G D 4.4.43-HCBS-Demotion-05354-g332859bfbe08-dirty #4
[ 149.192707] Hardware name: HiKey Development Board (DT)
[ 149.197985] Call trace:
[ 149.200458] [<ffffff800808ae98>] dump_backtrace+0x0/0x1e0
[ 149.205915] [<ffffff800808b098>] show_stack+0x20/0x28
[ 149.211021] [<ffffff8008553374>] dump_stack+0xa8/0xe0
[ 149.216126] [<ffffff800813a4d4>] spin_dump+0x78/0x9c
[ 149.221144] [<ffffff800813a7c8>] do_raw_spin_lock+0x180/0x1b4
[ 149.226952] [<ffffff8008d46f20>] _raw_spin_lock+0x6c/0x88
[ 149.232411] [<ffffff80080fae64>] __task_rq_lock+0x58/0xdc
[ 149.237868] [<ffffff80080ffb10>] wake_up_new_task+0xdc/0x318
[ 149.243588] [<ffffff80080c5f14>] _do_fork+0xfc/0x6f0
[ 149.248606] [<ffffff80080c6658>] SyS_clone+0x44/0x50
[ 149.253623] [<ffffff8008085f30>] el0_svc_naked+0x24/0x28
[ 149.640303] BUG: spinlock lockup suspected on CPU#3, swapper/3/0
[ 149.646376] lock: 0xffffffc0794a6f80, .magic: dead4ead, .owner: swapper/3/0, .owner_cpu: 3
[ 149.654819] CPU: 3 PID: 0 Comm: swapper/3 Tainted: G D 4.4.43-HCBS-Demotion-05354-g332859bfbe08-dirty #4
[ 149.665542] Hardware name: HiKey Development Board (DT)
[ 149.670820] Call trace:
[ 149.673291] [<ffffff800808ae98>] dump_backtrace+0x0/0x1e0
[ 149.678748] [<ffffff800808b098>] show_stack+0x20/0x28
[ 149.683853] [<ffffff8008553374>] dump_stack+0xa8/0xe0
[ 149.688959] [<ffffff800813a4d4>] spin_dump+0x78/0x9c
[ 149.693976] [<ffffff800813a7c8>] do_raw_spin_lock+0x180/0x1b4
[ 149.699783] [<ffffff8008d46f20>] _raw_spin_lock+0x6c/0x88
[ 149.705240] [<ffffff80080fc5e8>] scheduler_tick+0x50/0x2ac
[ 149.710785] [<ffffff800815cf78>] update_process_times+0x58/0x70
[ 149.716769] [<ffffff80081703a4>] tick_sched_timer+0x7c/0xfc
[ 149.722401] [<ffffff800815d5b8>] __hrtimer_run_queues+0x164/0x624
[ 149.728560] [<ffffff800815eb74>] hrtimer_interrupt+0xb0/0x1f4
[ 149.734369] [<ffffff8008933150>] arch_timer_handler_phys+0x3c/0x48
[ 149.740618] [<ffffff8008149e90>] handle_percpu_devid_irq+0xe8/0x3d0
[ 149.746954] [<ffffff8008145104>] generic_handle_irq+0x34/0x4c
[ 149.752761] [<ffffff80081451ac>] __handle_domain_irq+0x90/0xf8
[ 149.758656] [<ffffff8008082544>] gic_handle_irq+0x64/0xc4
[ 149.764113] Exception stack(0xffffffc0792e0050 to 0xffffffc0792e0180)
[ 149.770622] 0040: ffffffc03515b900 0000008000000000
[ 149.778537] 0060: ffffffc03515ba30 ffffff8008d47230 0000000060000145 ffffffc035156100
[ 149.786452] 0080: ffffffc03515ba30 ffffffc03515b900 0000000000000000 0000000000000000
[ 149.794366] 00a0: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[ 149.802281] 00c0: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[ 149.810196] 00e0: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[ 149.818111] 0100: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[ 149.826026] 0120: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[ 149.833940] 0140: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[ 149.841855] 0160: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[ 149.849770] [<ffffff80080857b8>] el1_irq+0xb8/0x130
[ 149.854699] [<ffffff8008d47230>] _raw_spin_unlock_irq+0x3c/0x74
[ 149.860682] [<ffffff800808b168>] die+0xc8/0x1b4
[ 149.865262] [<ffffff800809b36c>] __do_kernel_fault.part.6+0x7c/0x90
[ 149.871601] [<ffffff80080986a0>] do_translation_fault+0x0/0xec
[ 149.877498] [<ffffff8008098768>] do_translation_fault+0xc8/0xec
[ 149.883481] [<ffffff80080822e0>] do_mem_abort+0x54/0xb4
[ 149.888761] Exception stack(0xffffffc03515bb80 to 0xffffffc03515bcb0)
[ 149.895272] bb80: 0000000000000000 0000008000000000 ffffffc03515bd50 ffffff800810a3d8
[ 149.903187] bba0: 0000000000000055 0000000000000114 ffffffc03515bcd0 ffffff8008136434
[ 149.911102] bbc0: ffffffc035158000 ffffff800a6f0288 0000000000000000 0000000000000000
[ 149.919017] bbe0: 0000000000000002 0000000000000001 0000000000000000 ffffff800816cae8
[ 149.926932] bc00: 00000000000001c0 ffffff80099a0468 0000000000000000 0000000000000000
[ 149.934847] bc20: ffffff800811908c 0000000000000000 ffffff80081069ec 0000000000000001
[ 149.942762] bc40: ffffff8008118fdc 0000000000000080 0000000000000000 ffffff8008d40980
[ 149.950676] bc60: 0000000000000004 ffffff80098f7000 0000000000000005 ffffff8008d60420
[ 149.958591] bc80: 0000000034d5d91d 0000000000000000 0000000000000000 0000000000000000
[ 149.966505] bca0: 0000000000000000 0000000000000000
[ 149.971434] [<ffffff80080855c8>] el1_da+0x18/0x78
[ 149.976189] [<ffffff800811908c>] pick_next_task_fair+0xb0/0xd10
[ 149.982174] [<ffffff8008d40cb4>] __schedule+0x420/0xc10
[ 149.987456] [<ffffff8008d41574>] schedule+0x40/0xa0
[ 149.992387] [<ffffff8008d415f0>] schedule_preempt_disabled+0x1c/0x2c
[ 149.998809] [<ffffff8008121754>] cpu_startup_entry+0x13c/0x464
[ 150.004705] [<ffffff8008090e64>] secondary_start_kernel+0x164/0x1b4
[ 150.011038] [<0000000000d4d03c>] 0xd4d03c
So, it crashes after it has finished switching between RT and OTHER?
Luca
While switching between the two classes, yes. Exactly when, I'm not sure without adding some debug output.
Uhm... So, I do not understand... The script seems to switch between FIFO and
OTHER for 10 seconds (20 cycles with sleep 0.5), and the crash happens more
than 10s after the start of the test, right?
Luca
No, sorry, I wasn't clear. I extended the test to 200s and the crash seems to happen after 100s (but this varies).
Ok; I will increase the number of cycles to 400 and retest
Luca
In some cases, the scheduler invokes set_curr_task() before enqueueing the task. But if the task is enqueued as RT, it can be demoted during enqueue... In this case, set_curr_task() is called for the RT scheduling class, but the task ends up being enqueued in the CFS rq... And set_curr_task() is not invoked for the CFS scheduling class! Fix this by explicitly invoking the CFS set_curr_task() (if needed) in case of demotion, before enqueueing in the CFS rq.
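A hedged sketch of the fix's ordering follows; the class structure and names below are stand-ins for illustration, not the actual patch. The caller tells the RT class the task is current, the enqueue path then demotes the task, and the demotion path explicitly informs the CFS class before enqueueing there.

    #include <stdio.h>

    struct toy_task;

    struct toy_sched_class {
        const char *name;
        void (*set_curr_task)(struct toy_task *p);
    };

    struct toy_task {
        const struct toy_sched_class *class;
        int budget_exhausted;   /* stand-in for "RT group out of runtime" */
    };

    static void set_curr_task_rt(struct toy_task *p)
    {
        (void)p;
        printf("set_curr_task: rt\n");
    }

    static void set_curr_task_cfs(struct toy_task *p)
    {
        (void)p;
        printf("set_curr_task: cfs\n");
    }

    static const struct toy_sched_class rt_class  = { "rt",  set_curr_task_rt };
    static const struct toy_sched_class cfs_class = { "cfs", set_curr_task_cfs };

    /*
     * Enqueue the task.  If it is enqueued as RT but the budget is gone,
     * it gets demoted; following the fix described above, the CFS
     * set_curr_task() is invoked explicitly (if the task is current)
     * before the task lands on the CFS rq.
     */
    static void toy_enqueue_task(struct toy_task *p, int running)
    {
        if (p->class == &rt_class && p->budget_exhausted) {
            p->class = &cfs_class;
            if (running)
                p->class->set_curr_task(p);
        }
        printf("enqueued on the %s rq\n", p->class->name);
    }

    int main(void)
    {
        struct toy_task p = { &rt_class, 1 };

        /*
         * The caller has already invoked set_curr_task() for the class
         * it believes the task belongs to (the RT class) before
         * enqueueing:
         */
        p.class->set_curr_task(&p);
        toy_enqueue_task(&p, 1);
        return 0;
    }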
The demotion mechanism currently still has some bugs, which can be triggered by using the "periodic1.sh" or "periodic2.sh" scripts. After some experiments, it turned out that these changes improve the stability of the patchset (with this patch, the demotion mechanism can survive 33 minutes of "periodic1.sh" or "periodic2.sh"). The bugs are probably still there, though.
…l calls Provide a different lockdep key for rxrpc_call::user_mutex when the call is made on a kernel socket, such as by the AFS filesystem. The problem is that lockdep registers a false positive between userspace calling the sendmsg syscall on a user socket where call->user_mutex is held whilst userspace memory is accessed whereas the AFS filesystem may perform operations with mmap_sem held by the caller. In such a case, the following warning is produced. ====================================================== WARNING: possible circular locking dependency detected 4.14.0-fscache+ torvalds#243 Tainted: G E ------------------------------------------------------ modpost/16701 is trying to acquire lock: (&vnode->io_lock){+.+.}, at: [<ffffffffa000fc40>] afs_begin_vnode_operation+0x33/0x77 [kafs] but task is already holding lock: (&mm->mmap_sem){++++}, at: [<ffffffff8104376a>] __do_page_fault+0x1ef/0x486 which lock already depends on the new lock. the existing dependency chain (in reverse order) is: -> #3 (&mm->mmap_sem){++++}: __might_fault+0x61/0x89 _copy_from_iter_full+0x40/0x1fa rxrpc_send_data+0x8dc/0xff3 rxrpc_do_sendmsg+0x62f/0x6a1 rxrpc_sendmsg+0x166/0x1b7 sock_sendmsg+0x2d/0x39 ___sys_sendmsg+0x1ad/0x22b __sys_sendmsg+0x41/0x62 do_syscall_64+0x89/0x1be return_from_SYSCALL_64+0x0/0x75 -> #2 (&call->user_mutex){+.+.}: __mutex_lock+0x86/0x7d2 rxrpc_new_client_call+0x378/0x80e rxrpc_kernel_begin_call+0xf3/0x154 afs_make_call+0x195/0x454 [kafs] afs_vl_get_capabilities+0x193/0x198 [kafs] afs_vl_lookup_vldb+0x5f/0x151 [kafs] afs_create_volume+0x2e/0x2f4 [kafs] afs_mount+0x56a/0x8d7 [kafs] mount_fs+0x6a/0x109 vfs_kern_mount+0x67/0x135 do_mount+0x90b/0xb57 SyS_mount+0x72/0x98 do_syscall_64+0x89/0x1be return_from_SYSCALL_64+0x0/0x75 -> #1 (k-sk_lock-AF_RXRPC){+.+.}: lock_sock_nested+0x74/0x8a rxrpc_kernel_begin_call+0x8a/0x154 afs_make_call+0x195/0x454 [kafs] afs_fs_get_capabilities+0x17a/0x17f [kafs] afs_probe_fileserver+0xf7/0x2f0 [kafs] afs_select_fileserver+0x83f/0x903 [kafs] afs_fetch_status+0x89/0x11d [kafs] afs_iget+0x16f/0x4f8 [kafs] afs_mount+0x6c6/0x8d7 [kafs] mount_fs+0x6a/0x109 vfs_kern_mount+0x67/0x135 do_mount+0x90b/0xb57 SyS_mount+0x72/0x98 do_syscall_64+0x89/0x1be return_from_SYSCALL_64+0x0/0x75 -> #0 (&vnode->io_lock){+.+.}: lock_acquire+0x174/0x19f __mutex_lock+0x86/0x7d2 afs_begin_vnode_operation+0x33/0x77 [kafs] afs_fetch_data+0x80/0x12a [kafs] afs_readpages+0x314/0x405 [kafs] __do_page_cache_readahead+0x203/0x2ba filemap_fault+0x179/0x54d __do_fault+0x17/0x60 __handle_mm_fault+0x6d7/0x95c handle_mm_fault+0x24e/0x2a3 __do_page_fault+0x301/0x486 do_page_fault+0x236/0x259 page_fault+0x22/0x30 __clear_user+0x3d/0x60 padzero+0x1c/0x2b load_elf_binary+0x785/0xdc7 search_binary_handler+0x81/0x1ff do_execveat_common.isra.14+0x600/0x888 do_execve+0x1f/0x21 SyS_execve+0x28/0x2f do_syscall_64+0x89/0x1be return_from_SYSCALL_64+0x0/0x75 other info that might help us debug this: Chain exists of: &vnode->io_lock --> &call->user_mutex --> &mm->mmap_sem Possible unsafe locking scenario: CPU0 CPU1 ---- ---- lock(&mm->mmap_sem); lock(&call->user_mutex); lock(&mm->mmap_sem); lock(&vnode->io_lock); *** DEADLOCK *** 1 lock held by modpost/16701: #0: (&mm->mmap_sem){++++}, at: [<ffffffff8104376a>] __do_page_fault+0x1ef/0x486 stack backtrace: CPU: 0 PID: 16701 Comm: modpost Tainted: G E 4.14.0-fscache+ torvalds#243 Hardware name: ASUS All Series/H97-PLUS, BIOS 2306 10/09/2014 Call Trace: dump_stack+0x67/0x8e print_circular_bug+0x341/0x34f check_prev_add+0x11f/0x5d4 ? 
add_lock_to_list.isra.12+0x8b/0x8b ? add_lock_to_list.isra.12+0x8b/0x8b ? __lock_acquire+0xf77/0x10b4 __lock_acquire+0xf77/0x10b4 lock_acquire+0x174/0x19f ? afs_begin_vnode_operation+0x33/0x77 [kafs] __mutex_lock+0x86/0x7d2 ? afs_begin_vnode_operation+0x33/0x77 [kafs] ? afs_begin_vnode_operation+0x33/0x77 [kafs] ? afs_begin_vnode_operation+0x33/0x77 [kafs] afs_begin_vnode_operation+0x33/0x77 [kafs] afs_fetch_data+0x80/0x12a [kafs] afs_readpages+0x314/0x405 [kafs] __do_page_cache_readahead+0x203/0x2ba ? filemap_fault+0x179/0x54d filemap_fault+0x179/0x54d __do_fault+0x17/0x60 __handle_mm_fault+0x6d7/0x95c handle_mm_fault+0x24e/0x2a3 __do_page_fault+0x301/0x486 do_page_fault+0x236/0x259 page_fault+0x22/0x30 RIP: 0010:__clear_user+0x3d/0x60 RSP: 0018:ffff880071e93da0 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 000000000000011c RCX: 000000000000011c RDX: 0000000000000000 RSI: 0000000000000008 RDI: 000000000060f720 RBP: 000000000060f720 R08: 0000000000000001 R09: 0000000000000000 R10: 0000000000000001 R11: ffff8800b5459b68 R12: ffff8800ce150e00 R13: 000000000060f720 R14: 00000000006127a8 R15: 0000000000000000 padzero+0x1c/0x2b load_elf_binary+0x785/0xdc7 search_binary_handler+0x81/0x1ff do_execveat_common.isra.14+0x600/0x888 do_execve+0x1f/0x21 SyS_execve+0x28/0x2f do_syscall_64+0x89/0x1be entry_SYSCALL64_slow_path+0x25/0x25 RIP: 0033:0x7fdb6009ee07 RSP: 002b:00007fff566d9728 EFLAGS: 00000246 ORIG_RAX: 000000000000003b RAX: ffffffffffffffda RBX: 000055ba57280900 RCX: 00007fdb6009ee07 RDX: 000055ba5727f270 RSI: 000055ba5727cac0 RDI: 000055ba57280900 RBP: 000055ba57280900 R08: 00007fff566d9700 R09: 0000000000000000 R10: 000055ba5727cac0 R11: 0000000000000246 R12: 0000000000000000 R13: 000055ba5727cac0 R14: 000055ba5727f270 R15: 0000000000000000 Signed-off-by: David Howells <[email protected]>
Jiri Pirko says: ==================== mlxsw: GRE offloading fixes Petr says: This patchset fixes a couple bugs in offloading GRE tunnels in mlxsw driver. Patch #1 fixes a problem that local routes pointing at a GRE tunnel device are offloaded even if that netdevice is down. Patch #2 detects that as a result of moving a GRE netdevice to a different VRF, two tunnels now have a conflict of local addresses, something that the mlxsw driver can't offload. Patch #3 fixes a FIB abort caused by forming a route pointing at a GRE tunnel that is eligible for offloading but already onloaded. Patch #4 fixes a problem that next hops migrated to a new RIF kept the old RIF reference, which went dangling shortly afterwards. ==================== Signed-off-by: David S. Miller <[email protected]>
In the function brcmf_sdio_firmware_callback() the driver is unbound from the sdio function devices in the error path. However, the order in which it is done resulted in a use-after-free issue (see brcmf_ops_sdio_remove() in bcmsdh.c). Hence change the order and first unbind sdio function #2 device and then unbind sdio function #1 device. Cc: [email protected] # v4.12.x Fixes: 7a51461 ("brcmfmac: unbind all devices upon failure in firmware callback") Reported-by: Stefan Wahren <[email protected]> Reviewed-by: Hante Meuleman <[email protected]> Reviewed-by: Pieter-Paul Giesberts <[email protected]> Reviewed-by: Franky Lin <[email protected]> Signed-off-by: Arend van Spriel <[email protected]> Signed-off-by: Kalle Valo <[email protected]>
Default value of pcc_subspace_idx is -1. Make sure to check pcc_subspace_idx before using the same as array index. This will avoid following KASAN warnings too. [ 15.113449] ================================================================== [ 15.116983] BUG: KASAN: global-out-of-bounds in cppc_get_perf_caps+0xf3/0x3b0 [ 15.116983] Read of size 8 at addr ffffffffb9a5c0d8 by task swapper/0/1 [ 15.116983] CPU: 3 PID: 1 Comm: swapper/0 Not tainted 4.15.0-rc2+ #2 [ 15.116983] Hardware name: Dell Inc. OptiPlex 7040/0Y7WYT, BIOS 1.2.8 01/26/2016 [ 15.116983] Call Trace: [ 15.116983] dump_stack+0x7c/0xbb [ 15.116983] print_address_description+0x1df/0x290 [ 15.116983] kasan_report+0x28a/0x370 [ 15.116983] ? cppc_get_perf_caps+0xf3/0x3b0 [ 15.116983] cppc_get_perf_caps+0xf3/0x3b0 [ 15.116983] ? cpc_read+0x210/0x210 [ 15.116983] ? __rdmsr_on_cpu+0x90/0x90 [ 15.116983] ? rdmsrl_on_cpu+0xa9/0xe0 [ 15.116983] ? rdmsr_on_cpu+0x100/0x100 [ 15.116983] ? wrmsrl_on_cpu+0x9c/0xd0 [ 15.116983] ? wrmsrl_on_cpu+0x9c/0xd0 [ 15.116983] ? wrmsr_on_cpu+0xe0/0xe0 [ 15.116983] __intel_pstate_cpu_init.part.16+0x3a2/0x530 [ 15.116983] ? intel_pstate_init_cpu+0x197/0x390 [ 15.116983] ? show_no_turbo+0xe0/0xe0 [ 15.116983] ? __lockdep_init_map+0xa0/0x290 [ 15.116983] intel_pstate_cpu_init+0x30/0x60 [ 15.116983] cpufreq_online+0x155/0xac0 [ 15.116983] cpufreq_add_dev+0x9b/0xb0 [ 15.116983] subsys_interface_register+0x1ae/0x290 [ 15.116983] ? bus_unregister_notifier+0x40/0x40 [ 15.116983] ? mark_held_locks+0x83/0xb0 [ 15.116983] ? _raw_write_unlock_irqrestore+0x32/0x60 [ 15.116983] ? intel_pstate_setup+0xc/0x104 [ 15.116983] ? intel_pstate_setup+0xc/0x104 [ 15.116983] ? cpufreq_register_driver+0x1ce/0x2b0 [ 15.116983] cpufreq_register_driver+0x1ce/0x2b0 [ 15.116983] ? intel_pstate_setup+0x104/0x104 [ 15.116983] intel_pstate_register_driver+0x3a/0xa0 [ 15.116983] intel_pstate_init+0x3c4/0x434 [ 15.116983] ? intel_pstate_setup+0x104/0x104 [ 15.116983] ? intel_pstate_setup+0x104/0x104 [ 15.116983] do_one_initcall+0x9c/0x206 [ 15.116983] ? parameq+0xa0/0xa0 [ 15.116983] ? initcall_blacklisted+0x150/0x150 [ 15.116983] ? lock_downgrade+0x2c0/0x2c0 [ 15.116983] kernel_init_freeable+0x327/0x3f0 [ 15.116983] ? start_kernel+0x612/0x612 [ 15.116983] ? _raw_spin_unlock_irq+0x29/0x40 [ 15.116983] ? finish_task_switch+0xdd/0x320 [ 15.116983] ? finish_task_switch+0x8e/0x320 [ 15.116983] ? rest_init+0xd0/0xd0 [ 15.116983] kernel_init+0xf/0x11a [ 15.116983] ? rest_init+0xd0/0xd0 [ 15.116983] ret_from_fork+0x24/0x30 [ 15.116983] The buggy address belongs to the variable: [ 15.116983] __key.36299+0x38/0x40 [ 15.116983] Memory state around the buggy address: [ 15.116983] ffffffffb9a5bf80: fa fa fa fa 00 fa fa fa fa fa fa fa 00 fa fa fa [ 15.116983] ffffffffb9a5c000: fa fa fa fa 00 fa fa fa fa fa fa fa 00 fa fa fa [ 15.116983] >ffffffffb9a5c080: fa fa fa fa 00 fa fa fa fa fa fa fa 00 00 00 00 [ 15.116983] ^ [ 15.116983] ffffffffb9a5c100: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 [ 15.116983] ffffffffb9a5c180: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 [ 15.116983] ================================================================== Fixes: 85b1407 (ACPI / CPPC: Make CPPC ACPI driver aware of PCC subspace IDs) Reported-by: Changbin Du <[email protected]> Signed-off-by: George Cherian <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
While doing memory hot-unplug operation on a PowerPC VM running 1024 CPUs with 11TB of ram, I hit the following panic: BUG: Kernel NULL pointer dereference on read at 0x00000007 Faulting instruction address: 0xc000000000456048 Oops: Kernel access of bad area, sig: 11 [#2] LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS= 2048 NUMA pSeries Modules linked in: rpadlpar_io rpaphp CPU: 160 PID: 1 Comm: systemd Tainted: G D 5.9.0 #1 NIP: c000000000456048 LR: c000000000455fd4 CTR: c00000000047b350 REGS: c00006028d1b77a0 TRAP: 0300 Tainted: G D (5.9.0) MSR: 8000000000009033 <SF,EE,ME,IR,DR,RI,LE> CR: 24004228 XER: 00000000 CFAR: c00000000000f1b0 DAR: 0000000000000007 DSISR: 40000000 IRQMASK: 0 GPR00: c000000000455fd4 c00006028d1b7a30 c000000001bec800 0000000000000000 GPR04: 0000000000000dc0 0000000000000000 00000000000374ef c00007c53df99320 GPR08: 000007c53c980000 0000000000000000 000007c53c980000 0000000000000000 GPR12: 0000000000004400 c00000001e8e4400 0000000000000000 0000000000000f6a GPR16: 0000000000000000 c000000001c25930 c000000001d62528 00000000000000c1 GPR20: c000000001d62538 c00006be469e9000 0000000fffffffe0 c0000000003c0ff8 GPR24: 0000000000000018 0000000000000000 0000000000000dc0 0000000000000000 GPR28: c00007c513755700 c000000001c236a4 c00007bc4001f800 0000000000000001 NIP [c000000000456048] __kmalloc_node+0x108/0x790 LR [c000000000455fd4] __kmalloc_node+0x94/0x790 Call Trace: kvmalloc_node+0x58/0x110 mem_cgroup_css_online+0x10c/0x270 online_css+0x48/0xd0 cgroup_apply_control_enable+0x2c4/0x470 cgroup_mkdir+0x408/0x5f0 kernfs_iop_mkdir+0x90/0x100 vfs_mkdir+0x138/0x250 do_mkdirat+0x154/0x1c0 system_call_exception+0xf8/0x200 system_call_common+0xf0/0x27c Instruction dump: e93e0000 e90d0030 39290008 7cc9402a e94d0030 e93e0000 7ce95214 7f89502a 2fbc0000 419e0018 41920230 e9270010 <89290007> 7f994800 419e0220 7ee6bb78 This pointing to the following code: mm/slub.c:2851 if (unlikely(!object || !node_match(page, node))) { c000000000456038: 00 00 bc 2f cmpdi cr7,r28,0 c00000000045603c: 18 00 9e 41 beq cr7,c000000000456054 <__kmalloc_node+0x114> node_match(): mm/slub.c:2491 if (node != NUMA_NO_NODE && page_to_nid(page) != node) c000000000456040: 30 02 92 41 beq cr4,c000000000456270 <__kmalloc_node+0x330> page_to_nid(): include/linux/mm.h:1294 c000000000456044: 10 00 27 e9 ld r9,16(r7) c000000000456048: 07 00 29 89 lbz r9,7(r9) <<<< r9 = NULL node_match(): mm/slub.c:2491 c00000000045604c: 00 48 99 7f cmpw cr7,r25,r9 c000000000456050: 20 02 9e 41 beq cr7,c000000000456270 <__kmalloc_node+0x330> The panic occurred in slab_alloc_node() when checking for the page's node: object = c->freelist; page = c->page; if (unlikely(!object || !node_match(page, node))) { object = __slab_alloc(s, gfpflags, node, addr, c); stat(s, ALLOC_SLOWPATH); The issue is that object is not NULL while page is NULL which is odd but may happen if the cache flush happened after loading object but before loading page. Thus checking for the page pointer is required too. The cache flush is done through an inter processor interrupt when a piece of memory is off-lined. That interrupt is triggered when a memory hot-unplug operation is initiated and offline_pages() is calling the slub's MEM_GOING_OFFLINE callback slab_mem_going_offline_callback() which is calling flush_cpu_slab(). If that interrupt is caught between the reading of c->freelist and the reading of c->page, this could lead to such a situation. 
That situation is expected and the later call to this_cpu_cmpxchg_double() will detect the change to c->freelist and redo the whole operation. In commit 6159d0f ("mm/slub.c: page is always non-NULL in node_match()") check on the page pointer has been removed assuming that page is always valid when it is called. It happens that this is not true in that particular case, so check for page before calling node_match() here. Fixes: 6159d0f ("mm/slub.c: page is always non-NULL in node_match()") Signed-off-by: Laurent Dufour <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Acked-by: Vlastimil Babka <[email protected]> Acked-by: Christoph Lameter <[email protected]> Cc: Wei Yang <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Nathan Lynch <[email protected]> Cc: Scott Cheloha <[email protected]> Cc: Michal Hocko <[email protected]> Cc: <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
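A compact model of the corrected fast-path test, under the assumption that the slow path re-validates everything via this_cpu_cmpxchg_double(); the struct and field names are simplified stand-ins for the slub internals:

#include <stdbool.h>
#include <stddef.h>

#define NUMA_NO_NODE (-1)

struct page { int nid; };

static bool node_match(struct page *page, int node)
{
    /* Safe only if page is non-NULL; page may be NULL when a cache-flush
     * IPI lands between reading c->freelist and reading c->page. */
    return node == NUMA_NO_NODE || page->nid == node;
}

/* Model of the fixed fast-path test: also bail out to the slow path when
 * page is NULL, and let the later cmpxchg detect the race. */
static bool need_slow_path(void *object, struct page *page, int node)
{
    return !object || !page || !node_match(page, node);
}

int main(void)
{
    struct page p = { .nid = 0 };
    int object = 1;

    return need_slow_path(&object, NULL, 0) && !need_slow_path(&object, &p, 0) ? 0 : 1;
}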
This fix is for a failure that occurred in the DWARF unwind perf test. Stack unwinders may probe memory when looking for frames. Memory sanitizer will poison and track uninitialized memory on the stack, and on the heap if the value is copied to the heap. This can lead to false memory sanitizer failures for the use of an uninitialized value. Avoid this problem by removing the poison on the copied stack. The full msan failure with track origins looks like: ==2168==WARNING: MemorySanitizer: use-of-uninitialized-value #0 0x559ceb10755b in handle_cfi elfutils/libdwfl/frame_unwind.c:648:8 #1 0x559ceb105448 in __libdwfl_frame_unwind elfutils/libdwfl/frame_unwind.c:741:4 #2 0x559ceb0ece90 in dwfl_thread_getframes elfutils/libdwfl/dwfl_frame.c:435:7 #3 0x559ceb0ec6b7 in get_one_thread_frames_cb elfutils/libdwfl/dwfl_frame.c:379:10 #4 0x559ceb0ec6b7 in get_one_thread_cb elfutils/libdwfl/dwfl_frame.c:308:17 #5 0x559ceb0ec6b7 in dwfl_getthreads elfutils/libdwfl/dwfl_frame.c:283:17 #6 0x559ceb0ec6b7 in getthread elfutils/libdwfl/dwfl_frame.c:354:14 #7 0x559ceb0ec6b7 in dwfl_getthread_frames elfutils/libdwfl/dwfl_frame.c:388:10 #8 0x559ceaff6ae6 in unwind__get_entries tools/perf/util/unwind-libdw.c:236:8 #9 0x559ceabc9dbc in test_dwarf_unwind__thread tools/perf/tests/dwarf-unwind.c:111:8 torvalds#10 0x559ceabca5cf in test_dwarf_unwind__compare tools/perf/tests/dwarf-unwind.c:138:26 torvalds#11 0x7f812a6865b0 in bsearch (libc.so.6+0x4e5b0) torvalds#12 0x559ceabca871 in test_dwarf_unwind__krava_3 tools/perf/tests/dwarf-unwind.c:162:2 torvalds#13 0x559ceabca926 in test_dwarf_unwind__krava_2 tools/perf/tests/dwarf-unwind.c:169:9 torvalds#14 0x559ceabca946 in test_dwarf_unwind__krava_1 tools/perf/tests/dwarf-unwind.c:174:9 torvalds#15 0x559ceabcae12 in test__dwarf_unwind tools/perf/tests/dwarf-unwind.c:211:8 torvalds#16 0x559ceabbc4ab in run_test tools/perf/tests/builtin-test.c:418:9 torvalds#17 0x559ceabbc4ab in test_and_print tools/perf/tests/builtin-test.c:448:9 torvalds#18 0x559ceabbac70 in __cmd_test tools/perf/tests/builtin-test.c:669:4 torvalds#19 0x559ceabbac70 in cmd_test tools/perf/tests/builtin-test.c:815:9 torvalds#20 0x559cea960e30 in run_builtin tools/perf/perf.c:313:11 torvalds#21 0x559cea95fbce in handle_internal_command tools/perf/perf.c:365:8 torvalds#22 0x559cea95fbce in run_argv tools/perf/perf.c:409:2 torvalds#23 0x559cea95fbce in main tools/perf/perf.c:539:3 Uninitialized value was stored to memory at #0 0x559ceb106acf in __libdwfl_frame_reg_set elfutils/libdwfl/frame_unwind.c:77:22 #1 0x559ceb106acf in handle_cfi elfutils/libdwfl/frame_unwind.c:627:13 #2 0x559ceb105448 in __libdwfl_frame_unwind elfutils/libdwfl/frame_unwind.c:741:4 #3 0x559ceb0ece90 in dwfl_thread_getframes elfutils/libdwfl/dwfl_frame.c:435:7 #4 0x559ceb0ec6b7 in get_one_thread_frames_cb elfutils/libdwfl/dwfl_frame.c:379:10 #5 0x559ceb0ec6b7 in get_one_thread_cb elfutils/libdwfl/dwfl_frame.c:308:17 #6 0x559ceb0ec6b7 in dwfl_getthreads elfutils/libdwfl/dwfl_frame.c:283:17 #7 0x559ceb0ec6b7 in getthread elfutils/libdwfl/dwfl_frame.c:354:14 #8 0x559ceb0ec6b7 in dwfl_getthread_frames elfutils/libdwfl/dwfl_frame.c:388:10 #9 0x559ceaff6ae6 in unwind__get_entries tools/perf/util/unwind-libdw.c:236:8 torvalds#10 0x559ceabc9dbc in test_dwarf_unwind__thread tools/perf/tests/dwarf-unwind.c:111:8 torvalds#11 0x559ceabca5cf in test_dwarf_unwind__compare tools/perf/tests/dwarf-unwind.c:138:26 torvalds#12 0x7f812a6865b0 in bsearch (libc.so.6+0x4e5b0) torvalds#13 0x559ceabca871 in test_dwarf_unwind__krava_3 
tools/perf/tests/dwarf-unwind.c:162:2 torvalds#14 0x559ceabca926 in test_dwarf_unwind__krava_2 tools/perf/tests/dwarf-unwind.c:169:9 torvalds#15 0x559ceabca946 in test_dwarf_unwind__krava_1 tools/perf/tests/dwarf-unwind.c:174:9 torvalds#16 0x559ceabcae12 in test__dwarf_unwind tools/perf/tests/dwarf-unwind.c:211:8 torvalds#17 0x559ceabbc4ab in run_test tools/perf/tests/builtin-test.c:418:9 torvalds#18 0x559ceabbc4ab in test_and_print tools/perf/tests/builtin-test.c:448:9 torvalds#19 0x559ceabbac70 in __cmd_test tools/perf/tests/builtin-test.c:669:4 torvalds#20 0x559ceabbac70 in cmd_test tools/perf/tests/builtin-test.c:815:9 torvalds#21 0x559cea960e30 in run_builtin tools/perf/perf.c:313:11 torvalds#22 0x559cea95fbce in handle_internal_command tools/perf/perf.c:365:8 torvalds#23 0x559cea95fbce in run_argv tools/perf/perf.c:409:2 torvalds#24 0x559cea95fbce in main tools/perf/perf.c:539:3 Uninitialized value was stored to memory at #0 0x559ceb106a54 in handle_cfi elfutils/libdwfl/frame_unwind.c:613:9 #1 0x559ceb105448 in __libdwfl_frame_unwind elfutils/libdwfl/frame_unwind.c:741:4 #2 0x559ceb0ece90 in dwfl_thread_getframes elfutils/libdwfl/dwfl_frame.c:435:7 #3 0x559ceb0ec6b7 in get_one_thread_frames_cb elfutils/libdwfl/dwfl_frame.c:379:10 #4 0x559ceb0ec6b7 in get_one_thread_cb elfutils/libdwfl/dwfl_frame.c:308:17 #5 0x559ceb0ec6b7 in dwfl_getthreads elfutils/libdwfl/dwfl_frame.c:283:17 #6 0x559ceb0ec6b7 in getthread elfutils/libdwfl/dwfl_frame.c:354:14 #7 0x559ceb0ec6b7 in dwfl_getthread_frames elfutils/libdwfl/dwfl_frame.c:388:10 #8 0x559ceaff6ae6 in unwind__get_entries tools/perf/util/unwind-libdw.c:236:8 #9 0x559ceabc9dbc in test_dwarf_unwind__thread tools/perf/tests/dwarf-unwind.c:111:8 torvalds#10 0x559ceabca5cf in test_dwarf_unwind__compare tools/perf/tests/dwarf-unwind.c:138:26 torvalds#11 0x7f812a6865b0 in bsearch (libc.so.6+0x4e5b0) torvalds#12 0x559ceabca871 in test_dwarf_unwind__krava_3 tools/perf/tests/dwarf-unwind.c:162:2 torvalds#13 0x559ceabca926 in test_dwarf_unwind__krava_2 tools/perf/tests/dwarf-unwind.c:169:9 torvalds#14 0x559ceabca946 in test_dwarf_unwind__krava_1 tools/perf/tests/dwarf-unwind.c:174:9 torvalds#15 0x559ceabcae12 in test__dwarf_unwind tools/perf/tests/dwarf-unwind.c:211:8 torvalds#16 0x559ceabbc4ab in run_test tools/perf/tests/builtin-test.c:418:9 torvalds#17 0x559ceabbc4ab in test_and_print tools/perf/tests/builtin-test.c:448:9 torvalds#18 0x559ceabbac70 in __cmd_test tools/perf/tests/builtin-test.c:669:4 torvalds#19 0x559ceabbac70 in cmd_test tools/perf/tests/builtin-test.c:815:9 torvalds#20 0x559cea960e30 in run_builtin tools/perf/perf.c:313:11 torvalds#21 0x559cea95fbce in handle_internal_command tools/perf/perf.c:365:8 torvalds#22 0x559cea95fbce in run_argv tools/perf/perf.c:409:2 torvalds#23 0x559cea95fbce in main tools/perf/perf.c:539:3 Uninitialized value was stored to memory at #0 0x559ceaff8800 in memory_read tools/perf/util/unwind-libdw.c:156:10 #1 0x559ceb10f053 in expr_eval elfutils/libdwfl/frame_unwind.c:501:13 #2 0x559ceb1060cc in handle_cfi elfutils/libdwfl/frame_unwind.c:603:18 #3 0x559ceb105448 in __libdwfl_frame_unwind elfutils/libdwfl/frame_unwind.c:741:4 #4 0x559ceb0ece90 in dwfl_thread_getframes elfutils/libdwfl/dwfl_frame.c:435:7 #5 0x559ceb0ec6b7 in get_one_thread_frames_cb elfutils/libdwfl/dwfl_frame.c:379:10 #6 0x559ceb0ec6b7 in get_one_thread_cb elfutils/libdwfl/dwfl_frame.c:308:17 #7 0x559ceb0ec6b7 in dwfl_getthreads elfutils/libdwfl/dwfl_frame.c:283:17 #8 0x559ceb0ec6b7 in getthread elfutils/libdwfl/dwfl_frame.c:354:14 #9 
0x559ceb0ec6b7 in dwfl_getthread_frames elfutils/libdwfl/dwfl_frame.c:388:10 torvalds#10 0x559ceaff6ae6 in unwind__get_entries tools/perf/util/unwind-libdw.c:236:8 torvalds#11 0x559ceabc9dbc in test_dwarf_unwind__thread tools/perf/tests/dwarf-unwind.c:111:8 torvalds#12 0x559ceabca5cf in test_dwarf_unwind__compare tools/perf/tests/dwarf-unwind.c:138:26 torvalds#13 0x7f812a6865b0 in bsearch (libc.so.6+0x4e5b0) torvalds#14 0x559ceabca871 in test_dwarf_unwind__krava_3 tools/perf/tests/dwarf-unwind.c:162:2 torvalds#15 0x559ceabca926 in test_dwarf_unwind__krava_2 tools/perf/tests/dwarf-unwind.c:169:9 torvalds#16 0x559ceabca946 in test_dwarf_unwind__krava_1 tools/perf/tests/dwarf-unwind.c:174:9 torvalds#17 0x559ceabcae12 in test__dwarf_unwind tools/perf/tests/dwarf-unwind.c:211:8 torvalds#18 0x559ceabbc4ab in run_test tools/perf/tests/builtin-test.c:418:9 torvalds#19 0x559ceabbc4ab in test_and_print tools/perf/tests/builtin-test.c:448:9 torvalds#20 0x559ceabbac70 in __cmd_test tools/perf/tests/builtin-test.c:669:4 torvalds#21 0x559ceabbac70 in cmd_test tools/perf/tests/builtin-test.c:815:9 torvalds#22 0x559cea960e30 in run_builtin tools/perf/perf.c:313:11 torvalds#23 0x559cea95fbce in handle_internal_command tools/perf/perf.c:365:8 torvalds#24 0x559cea95fbce in run_argv tools/perf/perf.c:409:2 torvalds#25 0x559cea95fbce in main tools/perf/perf.c:539:3 Uninitialized value was stored to memory at #0 0x559cea9027d9 in __msan_memcpy llvm/llvm-project/compiler-rt/lib/msan/msan_interceptors.cpp:1558:3 #1 0x559cea9d2185 in sample_ustack tools/perf/arch/x86/tests/dwarf-unwind.c:41:2 #2 0x559cea9d202c in test__arch_unwind_sample tools/perf/arch/x86/tests/dwarf-unwind.c:72:9 #3 0x559ceabc9cbd in test_dwarf_unwind__thread tools/perf/tests/dwarf-unwind.c:106:6 #4 0x559ceabca5cf in test_dwarf_unwind__compare tools/perf/tests/dwarf-unwind.c:138:26 #5 0x7f812a6865b0 in bsearch (libc.so.6+0x4e5b0) #6 0x559ceabca871 in test_dwarf_unwind__krava_3 tools/perf/tests/dwarf-unwind.c:162:2 #7 0x559ceabca926 in test_dwarf_unwind__krava_2 tools/perf/tests/dwarf-unwind.c:169:9 #8 0x559ceabca946 in test_dwarf_unwind__krava_1 tools/perf/tests/dwarf-unwind.c:174:9 #9 0x559ceabcae12 in test__dwarf_unwind tools/perf/tests/dwarf-unwind.c:211:8 torvalds#10 0x559ceabbc4ab in run_test tools/perf/tests/builtin-test.c:418:9 torvalds#11 0x559ceabbc4ab in test_and_print tools/perf/tests/builtin-test.c:448:9 torvalds#12 0x559ceabbac70 in __cmd_test tools/perf/tests/builtin-test.c:669:4 torvalds#13 0x559ceabbac70 in cmd_test tools/perf/tests/builtin-test.c:815:9 torvalds#14 0x559cea960e30 in run_builtin tools/perf/perf.c:313:11 torvalds#15 0x559cea95fbce in handle_internal_command tools/perf/perf.c:365:8 torvalds#16 0x559cea95fbce in run_argv tools/perf/perf.c:409:2 torvalds#17 0x559cea95fbce in main tools/perf/perf.c:539:3 Uninitialized value was created by an allocation of 'bf' in the stack frame of function 'perf_event__synthesize_mmap_events' #0 0x559ceafc5f60 in perf_event__synthesize_mmap_events tools/perf/util/synthetic-events.c:445 SUMMARY: MemorySanitizer: use-of-uninitialized-value elfutils/libdwfl/frame_unwind.c:648:8 in handle_cfi Signed-off-by: Ian Rogers <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: [email protected] Cc: Jiri Olsa <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Sandeep Dasgupta <[email protected]> Cc: Stephane Eranian <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] 
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
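A hedged sketch of the unpoison-after-copy idea, assuming a Clang/MSan build; __msan_unpoison() comes from sanitizer/msan_interface.h, and everything else here is illustrative scaffolding rather than the actual perf code:

#include <string.h>

#if defined(__has_feature)
# if __has_feature(memory_sanitizer)
#  include <sanitizer/msan_interface.h>
#  define UNPOISON(p, n) __msan_unpoison(p, n)
# endif
#endif
#ifndef UNPOISON
# define UNPOISON(p, n) do { } while (0)
#endif

/* Copying a live stack region drags MSan's shadow ("poison") along with it,
 * so an unwinder probing the copy trips use-of-uninitialized-value reports.
 * Clearing the shadow on the destination removes the false positive. */
static void sample_stack_copy(void *dst, const void *src, size_t len)
{
    memcpy(dst, src, len);
    UNPOISON(dst, len);
}

int main(void)
{
    char stack_bytes[64];   /* deliberately left uninitialized, as in the report */
    char sample[64];

    sample_stack_copy(sample, stack_bytes, sizeof(sample));
    return 0;
}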
Actually, burst size is equal to '1 << desc->rqcfg.brst_size'. we should use burst size, not desc->rqcfg.brst_size. dma memcpy performance on Rockchip RV1126 @ 1512MHz A7, 1056MHz LPDDR3, 200MHz DMA: dmatest: /# echo dma0chan0 > /sys/module/dmatest/parameters/channel /# echo 4194304 > /sys/module/dmatest/parameters/test_buf_size /# echo 8 > /sys/module/dmatest/parameters/iterations /# echo y > /sys/module/dmatest/parameters/norandom /# echo y > /sys/module/dmatest/parameters/verbose /# echo 1 > /sys/module/dmatest/parameters/run dmatest: dma0chan0-copy0: result #1: 'test passed' with src_off=0x0 dst_off=0x0 len=0x400000 dmatest: dma0chan0-copy0: result #2: 'test passed' with src_off=0x0 dst_off=0x0 len=0x400000 dmatest: dma0chan0-copy0: result #3: 'test passed' with src_off=0x0 dst_off=0x0 len=0x400000 dmatest: dma0chan0-copy0: result #4: 'test passed' with src_off=0x0 dst_off=0x0 len=0x400000 dmatest: dma0chan0-copy0: result #5: 'test passed' with src_off=0x0 dst_off=0x0 len=0x400000 dmatest: dma0chan0-copy0: result #6: 'test passed' with src_off=0x0 dst_off=0x0 len=0x400000 dmatest: dma0chan0-copy0: result #7: 'test passed' with src_off=0x0 dst_off=0x0 len=0x400000 dmatest: dma0chan0-copy0: result #8: 'test passed' with src_off=0x0 dst_off=0x0 len=0x400000 Before: dmatest: dma0chan0-copy0: summary 8 tests, 0 failures 48 iops 200338 KB/s (0) After this patch: dmatest: dma0chan0-copy0: summary 8 tests, 0 failures 179 iops 734873 KB/s (0) After this patch and increase dma clk to 400MHz: dmatest: dma0chan0-copy0: summary 8 tests, 0 failures 259 iops 1062929 KB/s (0) Signed-off-by: Sugar Zhang <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Vinod Koul <[email protected]>
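A small sketch of the encoding the commit message spells out (brst_size is a log2 value), with illustrative numbers rather than the driver's register layout:

#include <stdio.h>

/* brst_size encodes the burst size as a power of two, so the number of
 * bytes moved per burst is 1 << brst_size; using brst_size itself in the
 * transfer math under-programs the DMA and costs bandwidth. */
static unsigned int burst_bytes(unsigned int brst_size)
{
    return 1u << brst_size;
}

int main(void)
{
    unsigned int brst_size = 3;   /* example encoding: 2^3 = 8-byte bursts */

    printf("encoded %u -> %u bytes per burst\n", brst_size, burst_bytes(brst_size));
    return 0;
}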
Ido Schimmel says: ==================== mlxsw: Couple of fixes Patch #1 fixes firmware flashing when CONFIG_MLXSW_CORE=y and CONFIG_MLXFW=m. Patch #2 prevents EMAD transactions from needlessly failing when the system is under heavy load by using exponential backoff. Please consider patch #2 for stable. ==================== Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
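For patch #2, a generic sketch of the exponential-backoff retry pattern; the constants and callback names are assumptions for illustration, not the mlxsw EMAD code:

#include <stdbool.h>

#define RETRY_BASE_MS   200
#define RETRY_MAX_TRIES 5

/* Generic retry loop: double the wait after every failed transaction so a
 * heavily loaded system gets progressively more time to respond, instead of
 * the request failing after a few fixed, closely spaced retries. */
static bool send_with_backoff(bool (*try_send)(void), void (*sleep_ms)(unsigned int))
{
    unsigned int delay = RETRY_BASE_MS;

    for (int i = 0; i < RETRY_MAX_TRIES; i++) {
        if (try_send())
            return true;
        if (i + 1 < RETRY_MAX_TRIES)
            sleep_ms(delay);
        delay *= 2;   /* exponential backoff */
    }
    return false;
}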
…asid() While digesting the XSAVE-related horrors which got introduced with the supervisor/user split, the recent addition of ENQCMD-related functionality got on the radar and turned out to be similarly broken. update_pasid(), which is only required when X86_FEATURE_ENQCMD is available, is invoked from two places: 1) From switch_to() for the incoming task 2) Via a SMP function call from the IOMMU/SMV code #1 is half-ways correct as it hacks around the brokenness of get_xsave_addr() by enforcing the state to be 'present', but all the conditionals in that code are completely pointless for that. Also the invocation is just useless overhead because at that point it's guaranteed that TIF_NEED_FPU_LOAD is set on the incoming task and all of this can be handled at return to user space. #2 is broken beyond repair. The comment in the code claims that it is safe to invoke this in an IPI, but that's just wishful thinking. FPU state of a running task is protected by fregs_lock() which is nothing else than a local_bh_disable(). As BH-disabled regions run usually with interrupts enabled the IPI can hit a code section which modifies FPU state and there is absolutely no guarantee that any of the assumptions which are made for the IPI case is true. Also the IPI is sent to all CPUs in mm_cpumask(mm), but the IPI is invoked with a NULL pointer argument, so it can hit a completely unrelated task and unconditionally force an update for nothing. Worse, it can hit a kernel thread which operates on a user space address space and set a random PASID for it. The offending commit does not cleanly revert, but it's sufficient to force disable X86_FEATURE_ENQCMD and to remove the broken update_pasid() code to make this dysfunctional all over the place. Anything more complex would require more surgery and none of the related functions outside of the x86 core code are blatantly wrong, so removing those would be overkill. As nothing enables the PASID bit in the IA32_XSS MSR yet, which is required to make this actually work, this cannot result in a regression except for related out of tree train-wrecks, but they are broken already today. Fixes: 20f0afd ("x86/mmu: Allocate/free a PASID") Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Acked-by: Andy Lutomirski <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
ASan reported a memory leak caused by info_linear not being deallocated. The info_linear was allocated during in perf_event__synthesize_one_bpf_prog(). This patch adds the corresponding free() when bpf_prog_info_node is freed in perf_env__purge_bpf(). $ sudo ./perf record -- sleep 5 [ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 0.025 MB perf.data (8 samples) ] ================================================================= ==297735==ERROR: LeakSanitizer: detected memory leaks Direct leak of 7688 byte(s) in 19 object(s) allocated from: #0 0x4f420f in malloc (/home/user/linux/tools/perf/perf+0x4f420f) #1 0xc06a74 in bpf_program__get_prog_info_linear /home/user/linux/tools/lib/bpf/libbpf.c:11113:16 #2 0xb426fe in perf_event__synthesize_one_bpf_prog /home/user/linux/tools/perf/util/bpf-event.c:191:16 #3 0xb42008 in perf_event__synthesize_bpf_events /home/user/linux/tools/perf/util/bpf-event.c:410:9 #4 0x594596 in record__synthesize /home/user/linux/tools/perf/builtin-record.c:1490:8 #5 0x58c9ac in __cmd_record /home/user/linux/tools/perf/builtin-record.c:1798:8 #6 0x58990b in cmd_record /home/user/linux/tools/perf/builtin-record.c:2901:8 #7 0x7b2a20 in run_builtin /home/user/linux/tools/perf/perf.c:313:11 #8 0x7b12ff in handle_internal_command /home/user/linux/tools/perf/perf.c:365:8 #9 0x7b2583 in run_argv /home/user/linux/tools/perf/perf.c:409:2 torvalds#10 0x7b0d79 in main /home/user/linux/tools/perf/perf.c:539:3 torvalds#11 0x7fa357ef6b74 in __libc_start_main /usr/src/debug/glibc-2.33-8.fc34.x86_64/csu/../csu/libc-start.c:332:16 Signed-off-by: Riccardo Mancini <[email protected]> Acked-by: Ian Rogers <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Andrii Nakryiko <[email protected]> Cc: Daniel Borkmann <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Fastabend <[email protected]> Cc: KP Singh <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Martin KaFai Lau <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Song Liu <[email protected]> Cc: Yonghong Song <[email protected]> Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
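A minimal model of the ownership rule behind the fix: the purge path frees the info_linear blob each node owns before freeing the node itself. A plain linked list stands in for perf's rbtree here:

#include <stdlib.h>

struct bpf_prog_info_node {
    void *info_linear;                   /* separately malloc()ed blob */
    struct bpf_prog_info_node *next;
};

/* Model of the fix: when a node is torn down, the info_linear buffer it
 * owns must be freed as well, otherwise every synthesized BPF program
 * leaks its info_linear allocation. */
static void purge_bpf_nodes(struct bpf_prog_info_node *node)
{
    while (node) {
        struct bpf_prog_info_node *next = node->next;

        free(node->info_linear);   /* the free() the leak report pointed at */
        free(node);
        node = next;
    }
}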
Add the following Telit FD980 composition 0x1056: Cfg #1: mass storage Cfg #2: rndis, tty, adb, tty, tty, tty, tty Signed-off-by: Daniele Palmas <[email protected]> Link: https://lore.kernel.org/r/[email protected] Cc: [email protected] Signed-off-by: Johan Hovold <[email protected]>
Often some test cases like btrfs/161 trigger lockdep splats that complain about possible unsafe lock scenario due to the fact that during mount, when reading the chunk tree we end up calling blkdev_get_by_path() while holding a read lock on a leaf of the chunk tree. That produces a lockdep splat like the following: [ 3653.683975] ====================================================== [ 3653.685148] WARNING: possible circular locking dependency detected [ 3653.686301] 5.15.0-rc7-btrfs-next-103 #1 Not tainted [ 3653.687239] ------------------------------------------------------ [ 3653.688400] mount/447465 is trying to acquire lock: [ 3653.689320] ffff8c6b0c76e528 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev.part.0+0xe7/0x320 [ 3653.691054] but task is already holding lock: [ 3653.692155] ffff8c6b0a9f39e0 (btrfs-chunk-00){++++}-{3:3}, at: __btrfs_tree_read_lock+0x24/0x110 [btrfs] [ 3653.693978] which lock already depends on the new lock. [ 3653.695510] the existing dependency chain (in reverse order) is: [ 3653.696915] -> #3 (btrfs-chunk-00){++++}-{3:3}: [ 3653.698053] down_read_nested+0x4b/0x140 [ 3653.698893] __btrfs_tree_read_lock+0x24/0x110 [btrfs] [ 3653.699988] btrfs_read_lock_root_node+0x31/0x40 [btrfs] [ 3653.701205] btrfs_search_slot+0x537/0xc00 [btrfs] [ 3653.702234] btrfs_insert_empty_items+0x32/0x70 [btrfs] [ 3653.703332] btrfs_init_new_device+0x563/0x15b0 [btrfs] [ 3653.704439] btrfs_ioctl+0x2110/0x3530 [btrfs] [ 3653.705405] __x64_sys_ioctl+0x83/0xb0 [ 3653.706215] do_syscall_64+0x3b/0xc0 [ 3653.706990] entry_SYSCALL_64_after_hwframe+0x44/0xae [ 3653.708040] -> #2 (sb_internal#2){.+.+}-{0:0}: [ 3653.708994] lock_release+0x13d/0x4a0 [ 3653.709533] up_write+0x18/0x160 [ 3653.710017] btrfs_sync_file+0x3f3/0x5b0 [btrfs] [ 3653.710699] __loop_update_dio+0xbd/0x170 [loop] [ 3653.711360] lo_ioctl+0x3b1/0x8a0 [loop] [ 3653.711929] block_ioctl+0x48/0x50 [ 3653.712442] __x64_sys_ioctl+0x83/0xb0 [ 3653.712991] do_syscall_64+0x3b/0xc0 [ 3653.713519] entry_SYSCALL_64_after_hwframe+0x44/0xae [ 3653.714233] -> #1 (&lo->lo_mutex){+.+.}-{3:3}: [ 3653.715026] __mutex_lock+0x92/0x900 [ 3653.715648] lo_open+0x28/0x60 [loop] [ 3653.716275] blkdev_get_whole+0x28/0x90 [ 3653.716867] blkdev_get_by_dev.part.0+0x142/0x320 [ 3653.717537] blkdev_open+0x5e/0xa0 [ 3653.718043] do_dentry_open+0x163/0x390 [ 3653.718604] path_openat+0x3f0/0xa80 [ 3653.719128] do_filp_open+0xa9/0x150 [ 3653.719652] do_sys_openat2+0x97/0x160 [ 3653.720197] __x64_sys_openat+0x54/0x90 [ 3653.720766] do_syscall_64+0x3b/0xc0 [ 3653.721285] entry_SYSCALL_64_after_hwframe+0x44/0xae [ 3653.721986] -> #0 (&disk->open_mutex){+.+.}-{3:3}: [ 3653.722775] __lock_acquire+0x130e/0x2210 [ 3653.723348] lock_acquire+0xd7/0x310 [ 3653.723867] __mutex_lock+0x92/0x900 [ 3653.724394] blkdev_get_by_dev.part.0+0xe7/0x320 [ 3653.725041] blkdev_get_by_path+0xb8/0xd0 [ 3653.725614] btrfs_get_bdev_and_sb+0x1b/0xb0 [btrfs] [ 3653.726332] open_fs_devices+0xd7/0x2c0 [btrfs] [ 3653.726999] btrfs_read_chunk_tree+0x3ad/0x870 [btrfs] [ 3653.727739] open_ctree+0xb8e/0x17bf [btrfs] [ 3653.728384] btrfs_mount_root.cold+0x12/0xde [btrfs] [ 3653.729130] legacy_get_tree+0x30/0x50 [ 3653.729676] vfs_get_tree+0x28/0xc0 [ 3653.730192] vfs_kern_mount.part.0+0x71/0xb0 [ 3653.730800] btrfs_mount+0x11d/0x3a0 [btrfs] [ 3653.731427] legacy_get_tree+0x30/0x50 [ 3653.731970] vfs_get_tree+0x28/0xc0 [ 3653.732486] path_mount+0x2d4/0xbe0 [ 3653.732997] __x64_sys_mount+0x103/0x140 [ 3653.733560] do_syscall_64+0x3b/0xc0 [ 3653.734080] entry_SYSCALL_64_after_hwframe+0x44/0xae 
[ 3653.734782] other info that might help us debug this: [ 3653.735784] Chain exists of: &disk->open_mutex --> sb_internal#2 --> btrfs-chunk-00 [ 3653.737123] Possible unsafe locking scenario: [ 3653.737865] CPU0 CPU1 [ 3653.738435] ---- ---- [ 3653.739007] lock(btrfs-chunk-00); [ 3653.739449] lock(sb_internal#2); [ 3653.740193] lock(btrfs-chunk-00); [ 3653.740955] lock(&disk->open_mutex); [ 3653.741431] *** DEADLOCK *** [ 3653.742176] 3 locks held by mount/447465: [ 3653.742739] #0: ffff8c6acf85c0e8 (&type->s_umount_key#44/1){+.+.}-{3:3}, at: alloc_super+0xd5/0x3b0 [ 3653.744114] #1: ffffffffc0b28f70 (uuid_mutex){+.+.}-{3:3}, at: btrfs_read_chunk_tree+0x59/0x870 [btrfs] [ 3653.745563] #2: ffff8c6b0a9f39e0 (btrfs-chunk-00){++++}-{3:3}, at: __btrfs_tree_read_lock+0x24/0x110 [btrfs] [ 3653.747066] stack backtrace: [ 3653.747723] CPU: 4 PID: 447465 Comm: mount Not tainted 5.15.0-rc7-btrfs-next-103 #1 [ 3653.748873] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014 [ 3653.750592] Call Trace: [ 3653.750967] dump_stack_lvl+0x57/0x72 [ 3653.751526] check_noncircular+0xf3/0x110 [ 3653.752136] ? stack_trace_save+0x4b/0x70 [ 3653.752748] __lock_acquire+0x130e/0x2210 [ 3653.753356] lock_acquire+0xd7/0x310 [ 3653.753898] ? blkdev_get_by_dev.part.0+0xe7/0x320 [ 3653.754596] ? lock_is_held_type+0xe8/0x140 [ 3653.755125] ? blkdev_get_by_dev.part.0+0xe7/0x320 [ 3653.755729] ? blkdev_get_by_dev.part.0+0xe7/0x320 [ 3653.756338] __mutex_lock+0x92/0x900 [ 3653.756794] ? blkdev_get_by_dev.part.0+0xe7/0x320 [ 3653.757400] ? do_raw_spin_unlock+0x4b/0xa0 [ 3653.757930] ? _raw_spin_unlock+0x29/0x40 [ 3653.758437] ? bd_prepare_to_claim+0x129/0x150 [ 3653.758999] ? trace_module_get+0x2b/0xd0 [ 3653.759508] ? try_module_get.part.0+0x50/0x80 [ 3653.760072] blkdev_get_by_dev.part.0+0xe7/0x320 [ 3653.760661] ? devcgroup_check_permission+0xc1/0x1f0 [ 3653.761288] blkdev_get_by_path+0xb8/0xd0 [ 3653.761797] btrfs_get_bdev_and_sb+0x1b/0xb0 [btrfs] [ 3653.762454] open_fs_devices+0xd7/0x2c0 [btrfs] [ 3653.763055] ? clone_fs_devices+0x8f/0x170 [btrfs] [ 3653.763689] btrfs_read_chunk_tree+0x3ad/0x870 [btrfs] [ 3653.764370] ? kvm_sched_clock_read+0x14/0x40 [ 3653.764922] open_ctree+0xb8e/0x17bf [btrfs] [ 3653.765493] ? super_setup_bdi_name+0x79/0xd0 [ 3653.766043] btrfs_mount_root.cold+0x12/0xde [btrfs] [ 3653.766780] ? rcu_read_lock_sched_held+0x3f/0x80 [ 3653.767488] ? kfree+0x1f2/0x3c0 [ 3653.767979] legacy_get_tree+0x30/0x50 [ 3653.768548] vfs_get_tree+0x28/0xc0 [ 3653.769076] vfs_kern_mount.part.0+0x71/0xb0 [ 3653.769718] btrfs_mount+0x11d/0x3a0 [btrfs] [ 3653.770381] ? rcu_read_lock_sched_held+0x3f/0x80 [ 3653.771086] ? kfree+0x1f2/0x3c0 [ 3653.771574] legacy_get_tree+0x30/0x50 [ 3653.772136] vfs_get_tree+0x28/0xc0 [ 3653.772673] path_mount+0x2d4/0xbe0 [ 3653.773201] __x64_sys_mount+0x103/0x140 [ 3653.773793] do_syscall_64+0x3b/0xc0 [ 3653.774333] entry_SYSCALL_64_after_hwframe+0x44/0xae [ 3653.775094] RIP: 0033:0x7f648bc45aaa This happens because through btrfs_read_chunk_tree(), which is called only during mount, ends up acquiring the mutex open_mutex of a block device while holding a read lock on a leaf of the chunk tree while other paths need to acquire other locks before locking extent buffers of the chunk tree. Since at mount time when we call btrfs_read_chunk_tree() we know that we don't have other tasks running in parallel and modifying the chunk tree, we can simply skip locking of chunk tree extent buffers. 
So do that and move the assertion that checks the fs is not yet mounted to the top block of btrfs_read_chunk_tree(), with a comment before doing it. Signed-off-by: Filipe Manana <[email protected]> Signed-off-by: David Sterba <[email protected]>
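A rough user-space model of the design choice, assuming the caller knows whether concurrent writers are possible (as mount-time code does); the flag-based shortcut mirrors the idea, not btrfs's actual path handling:

#include <pthread.h>
#include <stdbool.h>

struct tree {
    pthread_rwlock_t lock;
    int data;
};

/* Model of the mount-time shortcut: while the filesystem is still being
 * mounted no other task can modify the chunk tree, so the reader may skip
 * taking the extent-buffer locks and never ends up holding one across the
 * device-open call that lockdep complained about. */
static int read_tree(struct tree *t, bool skip_locking)
{
    int v;

    if (!skip_locking)
        pthread_rwlock_rdlock(&t->lock);
    v = t->data;
    if (!skip_locking)
        pthread_rwlock_unlock(&t->lock);
    return v;
}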
In thread__comm_len(),strlen() is called outside of the thread->comm_lock critical section,which may cause a UAF problems if comm__free() is called by the process_thread concurrently. backtrace of the core file is as follows: (gdb) bt #0 __strlen_evex () at ../sysdeps/x86_64/multiarch/strlen-evex.S:77 #1 0x000055ad15d31de5 in thread__comm_len (thread=0x7f627d20e300) at util/thread.c:320 #2 0x000055ad15d4fade in hists__calc_col_len (h=0x7f627d295940, hists=0x55ad1772bfe0) at util/hist.c:103 #3 hists__calc_col_len (hists=0x55ad1772bfe0, h=0x7f627d295940) at util/hist.c:79 #4 0x000055ad15d52c8c in output_resort (hists=hists@entry=0x55ad1772bfe0, prog=0x0, use_callchain=false, cb=cb@entry=0x0, cb_arg=0x0) at util/hist.c:1926 #5 0x000055ad15d530a4 in evsel__output_resort_cb (evsel=evsel@entry=0x55ad1772bde0, prog=prog@entry=0x0, cb=cb@entry=0x0, cb_arg=cb_arg@entry=0x0) at util/hist.c:1945 #6 0x000055ad15d53110 in evsel__output_resort (evsel=evsel@entry=0x55ad1772bde0, prog=prog@entry=0x0) at util/hist.c:1950 #7 0x000055ad15c6ae9a in perf_top__resort_hists (t=t@entry=0x7ffcd9cbf4f0) at builtin-top.c:311 #8 0x000055ad15c6cc6d in perf_top__print_sym_table (top=0x7ffcd9cbf4f0) at builtin-top.c:346 #9 display_thread (arg=0x7ffcd9cbf4f0) at builtin-top.c:700 torvalds#10 0x00007f6282fab4fa in start_thread (arg=<optimized out>) at pthread_create.c:443 torvalds#11 0x00007f628302e200 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81 The reason is that strlen() get a pointer to a memory that has been freed. The string pointer is stored in the structure comm_str, which corresponds to a rb_tree node,when the node is erased, the memory of the string is also freed. In thread__comm_len(),it gets the pointer within the thread->comm_lock critical section, but passed to strlen() outside of the thread->comm_lock critical section, and the perf process_thread may called comm__free() concurrently, cause this segfault problem. The process is as follows: display_thread process_thread -------------- -------------- thread__comm_len -> thread__comm_str # held the comm read lock -> __thread__comm_str(thread) # release the comm read lock thread__delete # held the comm write lock -> comm__free -> comm_str__put(comm->comm_str) -> zfree(&cs->str) # release the comm write lock # The memory of the string pointed to by comm has been free. -> thread->comm_len = strlen(comm); This patch expand the critical section range of thread->comm_lock in thread__comm_len(), to make strlen() called safe. Signed-off-by: Wenyu Liu <[email protected]> Acked-by: Namhyung Kim <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Christian Brauner <[email protected]> Cc: Feilong Lin <[email protected]> Cc: Hewenliang <[email protected]> Cc: Ian Rogers <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Yunfeng Ye <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
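A small pthread-based model of the widened critical section; the names mirror the commit text but the types are simplified stand-ins:

#include <pthread.h>
#include <string.h>

struct thread_info {
    pthread_rwlock_t comm_lock;
    char *comm_str;          /* freed under the write lock by another thread */
    size_t comm_len;
};

/* Model of the fix: both fetching the comm pointer and running strlen() on
 * it happen inside the read-side critical section, so a concurrent
 * comm__free() (which takes the write lock) cannot free the string while
 * it is being measured. */
static size_t thread_comm_len(struct thread_info *t)
{
    pthread_rwlock_rdlock(&t->comm_lock);
    if (t->comm_str)
        t->comm_len = strlen(t->comm_str);
    pthread_rwlock_unlock(&t->comm_lock);
    return t->comm_len;
}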
I got a report of a msan failure like below: $ sudo perf lock con -ab -- sleep 1 ... ==224416==WARNING: MemorySanitizer: use-of-uninitialized-value #0 0x5651160d6c96 in lock_contention_read util/bpf_lock_contention.c:290:8 #1 0x565115f90870 in __cmd_contention builtin-lock.c:1919:3 #2 0x565115f90870 in cmd_lock builtin-lock.c:2385:8 #3 0x565115f03a83 in run_builtin perf.c:330:11 #4 0x565115f03756 in handle_internal_command perf.c:384:8 #5 0x565115f02d53 in run_argv perf.c:428:2 #6 0x565115f02d53 in main perf.c:562:3 #7 0x7f43553bc632 in __libc_start_main #8 0x565115e865a9 in _start It was because the 'key' variable is not initialized. Actually it'd be set by bpf_map_get_next_key() but msan didn't seem to understand it. Let's make msan happy by initializing the variable. Signed-off-by: Namhyung Kim <[email protected]> Acked-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
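A tiny model of why the initialization matters, with a stub standing in for bpf_map_get_next_key(); the point is only that the previous-key variable starts from a defined value so MSan has nothing to complain about:

#include <stdio.h>

/* get_next_key() mirrors the next-key contract: it reads the previous key
 * and writes the next one, returning non-zero when iteration is done. */
static int get_next_key(int prev_key, int *next_key)
{
    if (prev_key >= 4)
        return -1;
    *next_key = prev_key + 1;
    return 0;
}

int main(void)
{
    int prev_key = 0;   /* the fix: start from a defined value, not stack garbage */
    int key;

    while (get_next_key(prev_key, &key) == 0) {
        printf("key %d\n", key);
        prev_key = key;
    }
    return 0;
}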
Seen in "perf stat --bpf-counters --for-each-cgroup test" running in a container: libbpf: Failed to bump RLIMIT_MEMLOCK (err = -1), you might need to do it explicitly! libbpf: Error in bpf_object__probe_loading():Operation not permitted(1). Couldn't load trivial BPF program. Make sure your kernel supports BPF (CONFIG_BPF_SYSCALL=y) and/or that RLIMIT_MEMLOCK is set to big enough value. libbpf: failed to load object 'bperf_cgroup_bpf' libbpf: failed to load BPF skeleton 'bperf_cgroup_bpf': -1 Failed to load cgroup skeleton #0 0x55f28a650981 in list_empty tools/include/linux/list.h:189 #1 0x55f28a6593b4 in evsel__exit util/evsel.c:1518 #2 0x55f28a6596af in evsel__delete util/evsel.c:1544 #3 0x55f28a89d166 in bperf_cgrp__destroy util/bpf_counter_cgroup.c:283 #4 0x55f28a899e9a in bpf_counter__destroy util/bpf_counter.c:816 #5 0x55f28a659455 in evsel__exit util/evsel.c:1520 #6 0x55f28a6596af in evsel__delete util/evsel.c:1544 #7 0x55f28a640d4d in evlist__purge util/evlist.c:148 #8 0x55f28a640ea6 in evlist__delete util/evlist.c:169 #9 0x55f28a4efbf2 in cmd_stat tools/perf/builtin-stat.c:2598 torvalds#10 0x55f28a6050c2 in run_builtin tools/perf/perf.c:330 torvalds#11 0x55f28a605633 in handle_internal_command tools/perf/perf.c:384 torvalds#12 0x55f28a6059fb in run_argv tools/perf/perf.c:428 torvalds#13 0x55f28a6061d3 in main tools/perf/perf.c:562 Signed-off-by: Ian Rogers <[email protected]> Acked-by: Namhyung Kim <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Florian Fischer <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Kan Liang <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Peter Zijlstra <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
…us union field If bperf (perf tools that use BPF skels) sets evsel->leader_skel or evsel->follower_skel then it appears that evsel->bpf_skel is set and can trigger the following use-after-free: ==13575==ERROR: AddressSanitizer: heap-use-after-free on address 0x60c000014080 at pc 0x55684b939880 bp 0x7ffdfcf30d70 sp 0x7ffdfcf30d68 READ of size 8 at 0x60c000014080 thread T0 #0 0x55684b93987f in sample_filter_bpf__destroy tools/perf/bpf_skel/sample_filter.skel.h:44:11 #1 0x55684b93987f in perf_bpf_filter__destroy tools/perf/util/bpf-filter.c:155:2 #2 0x55684b98f71e in evsel__exit tools/perf/util/evsel.c:1521:2 #3 0x55684b98a352 in evsel__delete tools/perf/util/evsel.c:1547:2 #4 0x55684b981918 in evlist__purge tools/perf/util/evlist.c:148:3 #5 0x55684b981918 in evlist__delete tools/perf/util/evlist.c:169:2 #6 0x55684b887d60 in cmd_stat tools/perf/builtin-stat.c:2598:2 .. 0x60c000014080 is located 0 bytes inside of 128-byte region [0x60c000014080,0x60c000014100) freed by thread T0 here: #0 0x55684b780e86 in free compiler-rt/lib/asan/asan_malloc_linux.cpp:52:3 #1 0x55684b9462da in bperf_cgroup_bpf__destroy tools/perf/bpf_skel/bperf_cgroup.skel.h:61:2 #2 0x55684b9462da in bperf_cgrp__destroy tools/perf/util/bpf_counter_cgroup.c:282:2 #3 0x55684b944c75 in bpf_counter__destroy tools/perf/util/bpf_counter.c:819:2 #4 0x55684b98f716 in evsel__exit tools/perf/util/evsel.c:1520:2 #5 0x55684b98a352 in evsel__delete tools/perf/util/evsel.c:1547:2 #6 0x55684b981918 in evlist__purge tools/perf/util/evlist.c:148:3 #7 0x55684b981918 in evlist__delete tools/perf/util/evlist.c:169:2 #8 0x55684b887d60 in cmd_stat tools/perf/builtin-stat.c:2598:2 ... previously allocated by thread T0 here: #0 0x55684b781338 in calloc compiler-rt/lib/asan/asan_malloc_linux.cpp:77:3 #1 0x55684b944e25 in bperf_cgroup_bpf__open_opts tools/perf/bpf_skel/bperf_cgroup.skel.h:73:35 #2 0x55684b944e25 in bperf_cgroup_bpf__open tools/perf/bpf_skel/bperf_cgroup.skel.h:97:9 #3 0x55684b944e25 in bperf_load_program tools/perf/util/bpf_counter_cgroup.c:55:9 #4 0x55684b944e25 in bperf_cgrp__load tools/perf/util/bpf_counter_cgroup.c:178:23 #5 0x55684b889289 in __run_perf_stat tools/perf/builtin-stat.c:713:7 #6 0x55684b889289 in run_perf_stat tools/perf/builtin-stat.c:949:8 #7 0x55684b888029 in cmd_stat tools/perf/builtin-stat.c:2537:12 Resolve by clearing 'evsel->bpf_skel' as part of bpf_counter__destroy(). Suggested-by: Namhyung Kim <[email protected]> Signed-off-by: Ian Rogers <[email protected]> Cc: Adrian Hunter <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: [email protected] Link: http://lore.kernel.org/lkml/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
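A compact model of the aliasing problem and the fix, assuming the three skeleton pointers really do share storage as the commit describes; the struct here is illustrative, not perf's evsel:

#include <stdlib.h>

struct evsel_like {
    union {
        void *bpf_skel;
        void *leader_skel;
        void *follower_skel;   /* all three alias the same storage */
    };
};

/* Model of the fix: whichever member the counter code used, destroying it
 * must also zero the union, otherwise a later generic cleanup path sees a
 * stale, already-freed pointer through the bpf_skel alias. */
static void counter_destroy(struct evsel_like *evsel)
{
    free(evsel->follower_skel);
    evsel->bpf_skel = NULL;    /* clears every alias of the union */
}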
gpi_ch_init() doesn't lock the ctrl_lock mutex, so there is no need to unlock it too. Instead the mutex is handled by the function gpi_alloc_chan_resources(), which properly locks and unlocks the mutex. ===================================== WARNING: bad unlock balance detected! 6.3.0-rc5-00253-g99792582ded1-dirty torvalds#15 Not tainted ------------------------------------- kworker/u16:0/9 is trying to release lock (&gpii->ctrl_lock) at: [<ffffb99d04e1284c>] gpi_alloc_chan_resources+0x108/0x5bc but there are no more locks to release! other info that might help us debug this: 6 locks held by kworker/u16:0/9: #0: ffff575740010938 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x220/0x594 #1: ffff80000809bdd0 (deferred_probe_work){+.+.}-{0:0}, at: process_one_work+0x220/0x594 #2: ffff575740f2a0f8 (&dev->mutex){....}-{3:3}, at: __device_attach+0x38/0x188 #3: ffff57574b5570f8 (&dev->mutex){....}-{3:3}, at: __device_attach+0x38/0x188 #4: ffffb99d06a2f180 (of_dma_lock){+.+.}-{3:3}, at: of_dma_request_slave_channel+0x138/0x280 #5: ffffb99d06a2ee20 (dma_list_mutex){+.+.}-{3:3}, at: dma_get_slave_channel+0x28/0x10c stack backtrace: CPU: 7 PID: 9 Comm: kworker/u16:0 Not tainted 6.3.0-rc5-00253-g99792582ded1-dirty torvalds#15 Hardware name: Google Pixel 3 (DT) Workqueue: events_unbound deferred_probe_work_func Call trace: dump_backtrace+0xa0/0xfc show_stack+0x18/0x24 dump_stack_lvl+0x60/0xac dump_stack+0x18/0x24 print_unlock_imbalance_bug+0x130/0x148 lock_release+0x270/0x300 __mutex_unlock_slowpath+0x48/0x2cc mutex_unlock+0x20/0x2c gpi_alloc_chan_resources+0x108/0x5bc dma_chan_get+0x84/0x188 dma_get_slave_channel+0x5c/0x10c gpi_of_dma_xlate+0x110/0x1a0 of_dma_request_slave_channel+0x174/0x280 dma_request_chan+0x3c/0x2d4 geni_i2c_probe+0x544/0x63c platform_probe+0x68/0xc4 really_probe+0x148/0x2ac __driver_probe_device+0x78/0xe0 driver_probe_device+0x3c/0x160 __device_attach_driver+0xb8/0x138 bus_for_each_drv+0x84/0xe0 __device_attach+0x9c/0x188 device_initial_probe+0x14/0x20 bus_probe_device+0xac/0xb0 device_add+0x60c/0x7d8 of_device_add+0x44/0x60 of_platform_device_create_pdata+0x90/0x124 of_platform_bus_create+0x15c/0x3c8 of_platform_populate+0x58/0xf8 devm_of_platform_populate+0x58/0xbc geni_se_probe+0xf0/0x164 platform_probe+0x68/0xc4 really_probe+0x148/0x2ac __driver_probe_device+0x78/0xe0 driver_probe_device+0x3c/0x160 __device_attach_driver+0xb8/0x138 bus_for_each_drv+0x84/0xe0 __device_attach+0x9c/0x188 device_initial_probe+0x14/0x20 bus_probe_device+0xac/0xb0 deferred_probe_work_func+0x8c/0xc8 process_one_work+0x2bc/0x594 worker_thread+0x228/0x438 kthread+0x108/0x10c ret_from_fork+0x10/0x20 Fixes: 5d0c353 ("dmaengine: qcom: Add GPI dma driver") Signed-off-by: Dmitry Baryshkov <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Vinod Koul <[email protected]>
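A minimal sketch of the lock-balance rule the fix restores: the helper never touches the mutex, and the caller brackets it exactly once on each side. pthread primitives stand in for the kernel mutex:

#include <pthread.h>

static pthread_mutex_t ctrl_lock = PTHREAD_MUTEX_INITIALIZER;

/* The init helper neither takes nor releases the lock... */
static int ch_init(void)
{
    /* channel/hardware setup would go here */
    return 0;
}

/* ...because the caller owns the locking. The bug was the helper also
 * unlocking, so the mutex ended up released twice. */
static int alloc_chan_resources(void)
{
    int ret;

    pthread_mutex_lock(&ctrl_lock);
    ret = ch_init();
    pthread_mutex_unlock(&ctrl_lock);
    return ret;
}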
Hayes Wang says: ==================== r8152: fix 2.5G devices v3: For patch #2, modify the comment. v2: For patch #1, remove inline for fc_pause_on_auto() and fc_pause_off_auto(), and update the commit message. For patch #2, define the magic value for OCP register 0xa424. v1: These patches fix some issues with RTL8156. ==================== Signed-off-by: David S. Miller <[email protected]>

Sai Krishna says: ==================== octeontx2: Miscellaneous fixes This patchset includes the following fixes. Patch #1 Fix for the race condition while updating APR table Patch #2 Fix end bit position in NPC scan config Patch #3 Fix depth of CAM, MEM table entries Patch #4 Increase the size of DMAC filter flows Patch #5 Fix driver crash resulting from invalid interface type information retrieved from firmware Patch #6 Fix incorrect mask used while installing filters involving fragmented packets Patch #7 Fixes for NPC field hash extract w.r.t IPV6 hash reduction, IPV6 field hash configuration. Patch #8 Fix for NPC hardware parser configuration destination address hash, IPV6 endianness issues. Patch #9 Fix for skipping mbox initialization for PFs disabled by firmware. Patch #10 Fix disabling packet I/O in case of mailbox timeout. Patch #11 Fix detaching LF resources in case of VF probe fail. ==================== Signed-off-by: David S. Miller <[email protected]>
On the node of an NFS client, some files saved in the mountpoint of the NFS server were copied to another location of the same NFS server. Accidentally, the nfs42_complete_copies() got a NULL-pointer dereference crash with the following syslog: [232064.838881] NFSv4: state recovery failed for open file nfs/pvc-12b5200d-cd0f-46a3-b9f0-af8f4fe0ef64.qcow2, error = -116 [232064.839360] NFSv4: state recovery failed for open file nfs/pvc-12b5200d-cd0f-46a3-b9f0-af8f4fe0ef64.qcow2, error = -116 [232066.588183] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000058 [232066.588586] Mem abort info: [232066.588701] ESR = 0x0000000096000007 [232066.588862] EC = 0x25: DABT (current EL), IL = 32 bits [232066.589084] SET = 0, FnV = 0 [232066.589216] EA = 0, S1PTW = 0 [232066.589340] FSC = 0x07: level 3 translation fault [232066.589559] Data abort info: [232066.589683] ISV = 0, ISS = 0x00000007 [232066.589842] CM = 0, WnR = 0 [232066.589967] user pgtable: 64k pages, 48-bit VAs, pgdp=00002000956ff400 [232066.590231] [0000000000000058] pgd=08001100ae100003, p4d=08001100ae100003, pud=08001100ae100003, pmd=08001100b3c00003, pte=0000000000000000 [232066.590757] Internal error: Oops: 96000007 [#1] SMP [232066.590958] Modules linked in: rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache netfs ocfs2_dlmfs ocfs2_stack_o2cb ocfs2_dlm vhost_net vhost vhost_iotlb tap tun ipt_rpfilter xt_multiport ip_set_hash_ip ip_set_hash_net xfrm_interface xfrm6_tunnel tunnel4 tunnel6 esp4 ah4 wireguard libcurve25519_generic veth xt_addrtype xt_set nf_conntrack_netlink ip_set_hash_ipportnet ip_set_hash_ipportip ip_set_bitmap_port ip_set_hash_ipport dummy ip_set ip_vs_sh ip_vs_wrr ip_vs_rr ip_vs iptable_filter sch_ingress nfnetlink_cttimeout vport_gre ip_gre ip_tunnel gre vport_geneve geneve vport_vxlan vxlan ip6_udp_tunnel udp_tunnel openvswitch nf_conncount dm_round_robin dm_service_time dm_multipath xt_nat xt_MASQUERADE nft_chain_nat nf_nat xt_mark xt_conntrack xt_comment nft_compat nft_counter nf_tables nfnetlink ocfs2 ocfs2_nodemanager ocfs2_stackglue iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi ipmi_ssif nbd overlay 8021q garp mrp bonding tls rfkill sunrpc ext4 mbcache jbd2 [232066.591052] vfat fat cas_cache cas_disk ses enclosure scsi_transport_sas sg acpi_ipmi ipmi_si ipmi_devintf ipmi_msghandler ip_tables vfio_pci vfio_pci_core vfio_virqfd vfio_iommu_type1 vfio dm_mirror dm_region_hash dm_log dm_mod nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 br_netfilter bridge stp llc fuse xfs libcrc32c ast drm_vram_helper qla2xxx drm_kms_helper syscopyarea crct10dif_ce sysfillrect ghash_ce sysimgblt sha2_ce fb_sys_fops cec sha256_arm64 sha1_ce drm_ttm_helper ttm nvme_fc igb sbsa_gwdt nvme_fabrics drm nvme_core i2c_algo_bit i40e scsi_transport_fc megaraid_sas aes_neon_bs [232066.596953] CPU: 6 PID: 4124696 Comm: 10.253.166.125- Kdump: loaded Not tainted 5.15.131-9.cl9_ocfs2.aarch64 #1 [232066.597356] Hardware name: Great Wall .\x93\x8e...RF6260 V5/GWMSSE2GL1T, BIOS T656FBE_V3.0.18 2024-01-06 [232066.597721] pstate: 20400009 (nzCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--) [232066.598034] pc : nfs4_reclaim_open_state+0x220/0x800 [nfsv4] [232066.598327] lr : nfs4_reclaim_open_state+0x12c/0x800 [nfsv4] [232066.598595] sp : ffff8000f568fc70 [232066.598731] x29: ffff8000f568fc70 x28: 0000000000001000 x27: ffff21003db33000 [232066.599030] x26: ffff800005521ae0 x25: ffff0100f98fa3f0 x24: 0000000000000001 [232066.599319] x23: ffff800009920008 x22: ffff21003db33040 x21: ffff21003db33050 
[232066.599628] x20: ffff410172fe9e40 x19: ffff410172fe9e00 x18: 0000000000000000 [232066.599914] x17: 0000000000000000 x16: 0000000000000004 x15: 0000000000000000 [232066.600195] x14: 0000000000000000 x13: ffff800008e685a8 x12: 00000000eac0c6e6 [232066.600498] x11: 0000000000000000 x10: 0000000000000008 x9 : ffff8000054e5828 [232066.600784] x8 : 00000000ffffffbf x7 : 0000000000000001 x6 : 000000000a9eb14a [232066.601062] x5 : 0000000000000000 x4 : ffff70ff8a14a800 x3 : 0000000000000058 [232066.601348] x2 : 0000000000000001 x1 : 54dce46366daa6c6 x0 : 0000000000000000 [232066.601636] Call trace: [232066.601749] nfs4_reclaim_open_state+0x220/0x800 [nfsv4] [232066.601998] nfs4_do_reclaim+0x1b8/0x28c [nfsv4] [232066.602218] nfs4_state_manager+0x928/0x10f0 [nfsv4] [232066.602455] nfs4_run_state_manager+0x78/0x1b0 [nfsv4] [232066.602690] kthread+0x110/0x114 [232066.602830] ret_from_fork+0x10/0x20 [232066.602985] Code: 1400000d f9403f20 f9402e61 91016003 (f9402c00) [232066.603284] SMP: stopping secondary CPUs [232066.606936] Starting crashdump kernel... [232066.607146] Bye! Analysing the vmcore, we know that nfs4_copy_state listed by destination nfs_server->ss_copies was added by the field copies in handle_async_copy(), and we found a waiting copy process with the stack as: PID: 3511963 TASK: ffff710028b47e00 CPU: 0 COMMAND: "cp" #0 [ffff8001116ef740] __switch_to at ffff8000081b92f4 #1 [ffff8001116ef760] __schedule at ffff800008dd0650 #2 [ffff8001116ef7c0] schedule at ffff800008dd0a00 #3 [ffff8001116ef7e0] schedule_timeout at ffff800008dd6aa0 #4 [ffff8001116ef860] __wait_for_common at ffff800008dd166c #5 [ffff8001116ef8e0] wait_for_completion_interruptible at ffff800008dd1898 #6 [ffff8001116ef8f0] handle_async_copy at ffff8000055142f4 [nfsv4] #7 [ffff8001116ef970] _nfs42_proc_copy at ffff8000055147c8 [nfsv4] #8 [ffff8001116efa80] nfs42_proc_copy at ffff800005514cf0 [nfsv4] #9 [ffff8001116efc50] __nfs4_copy_file_range.constprop.0 at ffff8000054ed694 [nfsv4] The NULL-pointer dereference was due to nfs42_complete_copies() listed the nfs_server->ss_copies by the field ss_copies of nfs4_copy_state. So the nfs4_copy_state address ffff0100f98fa3f0 was offset by 0x10 and the data accessed through this pointer was also incorrect. Generally, the ordered list nfs4_state_owner->so_states indicate open(O_RDWR) or open(O_WRITE) states are reclaimed firstly by nfs4_reclaim_open_state(). When destination state reclaim is failed with NFS_STATE_RECOVERY_FAILED and copies are not deleted in nfs_server->ss_copies, the source state may be passed to the nfs42_complete_copies() process earlier, resulting in this crash scene finally. To solve this issue, we add a list_head nfs_server->ss_src_copies for a server-to-server copy specially. Fixes: 0e65a32 ("NFS: handle source server reboot") Signed-off-by: Yanjun Zhang <[email protected]> Reviewed-by: Trond Myklebust <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
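A rough model of the data-structure change, with plain singly linked lists standing in for list_head; the field names follow the commit text, and the rest is assumed for illustration:

#include <stddef.h>

struct copy_state {
    struct copy_state *next;
};

struct server {
    struct copy_state *ss_copies;      /* copies where this server is the destination */
    struct copy_state *ss_src_copies;  /* copies where it is only the source */
};

/* Model of the fix: a server-to-server copy is tracked on its own list, so
 * walking ss_copies during state recovery never iterates entries that were
 * linked through a different member and therefore yields offset, bogus
 * pointers. */
static void track_src_copy(struct server *src, struct copy_state *cp)
{
    cp->next = src->ss_src_copies;
    src->ss_src_copies = cp;
}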
Fix a kernel panic in the br_netfilter module when sending untagged traffic via a VxLAN device. This happens during the check for fragmentation in br_nf_dev_queue_xmit. It is dependent on: 1) the br_netfilter module being loaded; 2) net.bridge.bridge-nf-call-iptables set to 1; 3) a bridge with a VxLAN (single-vxlan-device) netdevice as a bridge port; 4) untagged frames with size higher than the VxLAN MTU forwarded/flooded When forwarding the untagged packet to the VxLAN bridge port, before the netfilter hooks are called, br_handle_egress_vlan_tunnel is called and changes the skb_dst to the tunnel dst. The tunnel_dst is a metadata type of dst, i.e., skb_valid_dst(skb) is false, and metadata->dst.dev is NULL. Then in the br_netfilter hooks, in br_nf_dev_queue_xmit, there's a check for frames that needs to be fragmented: frames with higher MTU than the VxLAN device end up calling br_nf_ip_fragment, which in turns call ip_skb_dst_mtu. The ip_dst_mtu tries to use the skb_dst(skb) as if it was a valid dst with valid dst->dev, thus the crash. This case was never supported in the first place, so drop the packet instead. PING 10.0.0.2 (10.0.0.2) from 0.0.0.0 h1-eth0: 2000(2028) bytes of data. [ 176.291791] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000110 [ 176.292101] Mem abort info: [ 176.292184] ESR = 0x0000000096000004 [ 176.292322] EC = 0x25: DABT (current EL), IL = 32 bits [ 176.292530] SET = 0, FnV = 0 [ 176.292709] EA = 0, S1PTW = 0 [ 176.292862] FSC = 0x04: level 0 translation fault [ 176.293013] Data abort info: [ 176.293104] ISV = 0, ISS = 0x00000004, ISS2 = 0x00000000 [ 176.293488] CM = 0, WnR = 0, TnD = 0, TagAccess = 0 [ 176.293787] GCS = 0, Overlay = 0, DirtyBit = 0, Xs = 0 [ 176.293995] user pgtable: 4k pages, 48-bit VAs, pgdp=0000000043ef5000 [ 176.294166] [0000000000000110] pgd=0000000000000000, p4d=0000000000000000 [ 176.294827] Internal error: Oops: 0000000096000004 [#1] PREEMPT SMP [ 176.295252] Modules linked in: vxlan ip6_udp_tunnel udp_tunnel veth br_netfilter bridge stp llc ipv6 crct10dif_ce [ 176.295923] CPU: 0 PID: 188 Comm: ping Not tainted 6.8.0-rc3-g5b3fbd61b9d1 #2 [ 176.296314] Hardware name: linux,dummy-virt (DT) [ 176.296535] pstate: 80000005 (Nzcv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--) [ 176.296808] pc : br_nf_dev_queue_xmit+0x390/0x4ec [br_netfilter] [ 176.297382] lr : br_nf_dev_queue_xmit+0x2ac/0x4ec [br_netfilter] [ 176.297636] sp : ffff800080003630 [ 176.297743] x29: ffff800080003630 x28: 0000000000000008 x27: ffff6828c49ad9f8 [ 176.298093] x26: ffff6828c49ad000 x25: 0000000000000000 x24: 00000000000003e8 [ 176.298430] x23: 0000000000000000 x22: ffff6828c4960b40 x21: ffff6828c3b16d28 [ 176.298652] x20: ffff6828c3167048 x19: ffff6828c3b16d00 x18: 0000000000000014 [ 176.298926] x17: ffffb0476322f000 x16: ffffb7e164023730 x15: 0000000095744632 [ 176.299296] x14: ffff6828c3f1c880 x13: 0000000000000002 x12: ffffb7e137926a70 [ 176.299574] x11: 0000000000000001 x10: ffff6828c3f1c898 x9 : 0000000000000000 [ 176.300049] x8 : ffff6828c49bf070 x7 : 0008460f18d5f20e x6 : f20e0100bebafeca [ 176.300302] x5 : ffff6828c7f918fe x4 : ffff6828c49bf070 x3 : 0000000000000000 [ 176.300586] x2 : 0000000000000000 x1 : ffff6828c3c7ad00 x0 : ffff6828c7f918f0 [ 176.300889] Call trace: [ 176.301123] br_nf_dev_queue_xmit+0x390/0x4ec [br_netfilter] [ 176.301411] br_nf_post_routing+0x2a8/0x3e4 [br_netfilter] [ 176.301703] nf_hook_slow+0x48/0x124 [ 176.302060] br_forward_finish+0xc8/0xe8 [bridge] [ 176.302371] br_nf_hook_thresh+0x124/0x134 [br_netfilter] [ 
176.302605] br_nf_forward_finish+0x118/0x22c [br_netfilter] [ 176.302824] br_nf_forward_ip.part.0+0x264/0x290 [br_netfilter] [ 176.303136] br_nf_forward+0x2b8/0x4e0 [br_netfilter] [ 176.303359] nf_hook_slow+0x48/0x124 [ 176.303803] __br_forward+0xc4/0x194 [bridge] [ 176.304013] br_flood+0xd4/0x168 [bridge] [ 176.304300] br_handle_frame_finish+0x1d4/0x5c4 [bridge] [ 176.304536] br_nf_hook_thresh+0x124/0x134 [br_netfilter] [ 176.304978] br_nf_pre_routing_finish+0x29c/0x494 [br_netfilter] [ 176.305188] br_nf_pre_routing+0x250/0x524 [br_netfilter] [ 176.305428] br_handle_frame+0x244/0x3cc [bridge] [ 176.305695] __netif_receive_skb_core.constprop.0+0x33c/0xecc [ 176.306080] __netif_receive_skb_one_core+0x40/0x8c [ 176.306197] __netif_receive_skb+0x18/0x64 [ 176.306369] process_backlog+0x80/0x124 [ 176.306540] __napi_poll+0x38/0x17c [ 176.306636] net_rx_action+0x124/0x26c [ 176.306758] __do_softirq+0x100/0x26c [ 176.307051] ____do_softirq+0x10/0x1c [ 176.307162] call_on_irq_stack+0x24/0x4c [ 176.307289] do_softirq_own_stack+0x1c/0x2c [ 176.307396] do_softirq+0x54/0x6c [ 176.307485] __local_bh_enable_ip+0x8c/0x98 [ 176.307637] __dev_queue_xmit+0x22c/0xd28 [ 176.307775] neigh_resolve_output+0xf4/0x1a0 [ 176.308018] ip_finish_output2+0x1c8/0x628 [ 176.308137] ip_do_fragment+0x5b4/0x658 [ 176.308279] ip_fragment.constprop.0+0x48/0xec [ 176.308420] __ip_finish_output+0xa4/0x254 [ 176.308593] ip_finish_output+0x34/0x130 [ 176.308814] ip_output+0x6c/0x108 [ 176.308929] ip_send_skb+0x50/0xf0 [ 176.309095] ip_push_pending_frames+0x30/0x54 [ 176.309254] raw_sendmsg+0x758/0xaec [ 176.309568] inet_sendmsg+0x44/0x70 [ 176.309667] __sys_sendto+0x110/0x178 [ 176.309758] __arm64_sys_sendto+0x28/0x38 [ 176.309918] invoke_syscall+0x48/0x110 [ 176.310211] el0_svc_common.constprop.0+0x40/0xe0 [ 176.310353] do_el0_svc+0x1c/0x28 [ 176.310434] el0_svc+0x34/0xb4 [ 176.310551] el0t_64_sync_handler+0x120/0x12c [ 176.310690] el0t_64_sync+0x190/0x194 [ 176.311066] Code: f9402e61 79402aa2 927ff821 f9400023 (f9408860) [ 176.315743] ---[ end trace 0000000000000000 ]--- [ 176.316060] Kernel panic - not syncing: Oops: Fatal exception in interrupt [ 176.316371] Kernel Offset: 0x37e0e3000000 from 0xffff800080000000 [ 176.316564] PHYS_OFFSET: 0xffff97d780000000 [ 176.316782] CPU features: 0x0,88000203,3c020000,0100421b [ 176.317210] Memory Limit: none [ 176.317527] ---[ end Kernel panic - not syncing: Oops: Fatal Exception in interrupt ]---\ Fixes: 11538d0 ("bridge: vlan dst_metadata hooks in ingress and egress paths") Reviewed-by: Ido Schimmel <[email protected]> Signed-off-by: Andy Roulin <[email protected]> Acked-by: Nikolay Aleksandrov <[email protected]> Link: https://patch.msgid.link/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
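A hedged sketch of the drop-instead-of-dereference rule, with simplified stand-ins for the skb/dst types; the real check is against a metadata (tunnel) dst as described above, not these illustrative fields:

#include <stdbool.h>
#include <stddef.h>

struct net_device;

struct dst_entry {
    bool metadata;             /* tunnel metadata dst: no usable dev/MTU behind it */
    struct net_device *dev;
};

/* Model of the fix: a metadata dst (as set up for VxLAN tunnel egress) has
 * no real device, so the fragmentation path must not ask it for an MTU;
 * such packets are dropped instead of dereferencing dst->dev. */
static int queue_xmit(struct dst_entry *dst)
{
    if (!dst || dst->metadata || !dst->dev)
        return -1;   /* drop: this case was never supported */
    /* ... MTU lookup and fragmentation would happen here ... */
    return 0;
}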
Andy Roulin says: ==================== netfilter: br_netfilter: fix panic with metadata_dst skb There's a kernel panic possible in the br_netfilter module when sending untagged traffic via a VxLAN device. Traceback is included below. This happens during the check for fragmentation in br_nf_dev_queue_xmit if the MTU on the VxLAN device is not big enough. It is dependent on: 1) the br_netfilter module being loaded; 2) net.bridge.bridge-nf-call-iptables set to 1; 3) a bridge with a VxLAN (single-vxlan-device) netdevice as a bridge port; 4) untagged frames with size higher than the VxLAN MTU forwarded/flooded This case was never supported in the first place, so the first patch drops such packets. A regression selftest is added as part of the second patch. PING 10.0.0.2 (10.0.0.2) from 0.0.0.0 h1-eth0: 2000(2028) bytes of data. [ 176.291791] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000110 [ 176.292101] Mem abort info: [ 176.292184] ESR = 0x0000000096000004 [ 176.292322] EC = 0x25: DABT (current EL), IL = 32 bits [ 176.292530] SET = 0, FnV = 0 [ 176.292709] EA = 0, S1PTW = 0 [ 176.292862] FSC = 0x04: level 0 translation fault [ 176.293013] Data abort info: [ 176.293104] ISV = 0, ISS = 0x00000004, ISS2 = 0x00000000 [ 176.293488] CM = 0, WnR = 0, TnD = 0, TagAccess = 0 [ 176.293787] GCS = 0, Overlay = 0, DirtyBit = 0, Xs = 0 [ 176.293995] user pgtable: 4k pages, 48-bit VAs, pgdp=0000000043ef5000 [ 176.294166] [0000000000000110] pgd=0000000000000000, p4d=0000000000000000 [ 176.294827] Internal error: Oops: 0000000096000004 [#1] PREEMPT SMP [ 176.295252] Modules linked in: vxlan ip6_udp_tunnel udp_tunnel veth br_netfilter bridge stp llc ipv6 crct10dif_ce [ 176.295923] CPU: 0 PID: 188 Comm: ping Not tainted 6.8.0-rc3-g5b3fbd61b9d1 #2 [ 176.296314] Hardware name: linux,dummy-virt (DT) [ 176.296535] pstate: 80000005 (Nzcv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--) [ 176.296808] pc : br_nf_dev_queue_xmit+0x390/0x4ec [br_netfilter] [ 176.297382] lr : br_nf_dev_queue_xmit+0x2ac/0x4ec [br_netfilter] [ 176.297636] sp : ffff800080003630 [ 176.297743] x29: ffff800080003630 x28: 0000000000000008 x27: ffff6828c49ad9f8 [ 176.298093] x26: ffff6828c49ad000 x25: 0000000000000000 x24: 00000000000003e8 [ 176.298430] x23: 0000000000000000 x22: ffff6828c4960b40 x21: ffff6828c3b16d28 [ 176.298652] x20: ffff6828c3167048 x19: ffff6828c3b16d00 x18: 0000000000000014 [ 176.298926] x17: ffffb0476322f000 x16: ffffb7e164023730 x15: 0000000095744632 [ 176.299296] x14: ffff6828c3f1c880 x13: 0000000000000002 x12: ffffb7e137926a70 [ 176.299574] x11: 0000000000000001 x10: ffff6828c3f1c898 x9 : 0000000000000000 [ 176.300049] x8 : ffff6828c49bf070 x7 : 0008460f18d5f20e x6 : f20e0100bebafeca [ 176.300302] x5 : ffff6828c7f918fe x4 : ffff6828c49bf070 x3 : 0000000000000000 [ 176.300586] x2 : 0000000000000000 x1 : ffff6828c3c7ad00 x0 : ffff6828c7f918f0 [ 176.300889] Call trace: [ 176.301123] br_nf_dev_queue_xmit+0x390/0x4ec [br_netfilter] [ 176.301411] br_nf_post_routing+0x2a8/0x3e4 [br_netfilter] [ 176.301703] nf_hook_slow+0x48/0x124 [ 176.302060] br_forward_finish+0xc8/0xe8 [bridge] [ 176.302371] br_nf_hook_thresh+0x124/0x134 [br_netfilter] [ 176.302605] br_nf_forward_finish+0x118/0x22c [br_netfilter] [ 176.302824] br_nf_forward_ip.part.0+0x264/0x290 [br_netfilter] [ 176.303136] br_nf_forward+0x2b8/0x4e0 [br_netfilter] [ 176.303359] nf_hook_slow+0x48/0x124 [ 176.303803] __br_forward+0xc4/0x194 [bridge] [ 176.304013] br_flood+0xd4/0x168 [bridge] [ 176.304300] br_handle_frame_finish+0x1d4/0x5c4 [bridge] [ 
176.304536] br_nf_hook_thresh+0x124/0x134 [br_netfilter] [ 176.304978] br_nf_pre_routing_finish+0x29c/0x494 [br_netfilter] [ 176.305188] br_nf_pre_routing+0x250/0x524 [br_netfilter] [ 176.305428] br_handle_frame+0x244/0x3cc [bridge] [ 176.305695] __netif_receive_skb_core.constprop.0+0x33c/0xecc [ 176.306080] __netif_receive_skb_one_core+0x40/0x8c [ 176.306197] __netif_receive_skb+0x18/0x64 [ 176.306369] process_backlog+0x80/0x124 [ 176.306540] __napi_poll+0x38/0x17c [ 176.306636] net_rx_action+0x124/0x26c [ 176.306758] __do_softirq+0x100/0x26c [ 176.307051] ____do_softirq+0x10/0x1c [ 176.307162] call_on_irq_stack+0x24/0x4c [ 176.307289] do_softirq_own_stack+0x1c/0x2c [ 176.307396] do_softirq+0x54/0x6c [ 176.307485] __local_bh_enable_ip+0x8c/0x98 [ 176.307637] __dev_queue_xmit+0x22c/0xd28 [ 176.307775] neigh_resolve_output+0xf4/0x1a0 [ 176.308018] ip_finish_output2+0x1c8/0x628 [ 176.308137] ip_do_fragment+0x5b4/0x658 [ 176.308279] ip_fragment.constprop.0+0x48/0xec [ 176.308420] __ip_finish_output+0xa4/0x254 [ 176.308593] ip_finish_output+0x34/0x130 [ 176.308814] ip_output+0x6c/0x108 [ 176.308929] ip_send_skb+0x50/0xf0 [ 176.309095] ip_push_pending_frames+0x30/0x54 [ 176.309254] raw_sendmsg+0x758/0xaec [ 176.309568] inet_sendmsg+0x44/0x70 [ 176.309667] __sys_sendto+0x110/0x178 [ 176.309758] __arm64_sys_sendto+0x28/0x38 [ 176.309918] invoke_syscall+0x48/0x110 [ 176.310211] el0_svc_common.constprop.0+0x40/0xe0 [ 176.310353] do_el0_svc+0x1c/0x28 [ 176.310434] el0_svc+0x34/0xb4 [ 176.310551] el0t_64_sync_handler+0x120/0x12c [ 176.310690] el0t_64_sync+0x190/0x194 [ 176.311066] Code: f9402e61 79402aa2 927ff821 f9400023 (f9408860) [ 176.315743] ---[ end trace 0000000000000000 ]--- [ 176.316060] Kernel panic - not syncing: Oops: Fatal exception in interrupt [ 176.316371] Kernel Offset: 0x37e0e3000000 from 0xffff800080000000 [ 176.316564] PHYS_OFFSET: 0xffff97d780000000 [ 176.316782] CPU features: 0x0,88000203,3c020000,0100421b [ 176.317210] Memory Limit: none [ 176.317527] ---[ end Kernel panic - not syncing: Oops: Fatal Exception in interrupt ]---\ ==================== Link: https://patch.msgid.link/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
Hou Tao says: ==================== Check the remaining info_cnt before repeating btf fields From: Hou Tao <[email protected]> Hi, The patch set adds the missing check against info_cnt when flattening an array of nested structs. The problem was spotted when developing dynptr key support for hash maps. Patch #1 adds the missing check and patch #2 adds three success test cases and one failure test case for the problem. Comments are always welcome. Change Log: v2: * patch #1: check info_cnt in btf_repeat_fields() * patch #2: use a hard-coded number instead of BTF_FIELDS_MAX, because BTF_FIELDS_MAX is not always available in vmlinux.h (e.g., for llvm 17/18) v1: https://lore.kernel.org/bpf/[email protected]/T/#t ==================== Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
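As a rough illustration of the bound being enforced (this is not the upstream btf_repeat_fields() code; the function and parameter names are only modelled on it), the check has to account for the fields already recorded and for the copies still to be made for the remaining array elements:

```c
#include <linux/errno.h>
#include <linux/types.h>

/* The fields of the first array element are already recorded (info_cnt
 * entries used so far); repeating them for the remaining elements must
 * still fit into the caller's array of max_cnt entries.
 */
static int btf_check_repeat_room(u32 info_cnt, u32 field_cnt, u32 nelems,
				 u32 max_cnt)
{
	if (info_cnt > max_cnt)
		return -E2BIG;

	/* widen to u64 so field_cnt * (nelems - 1) cannot wrap around */
	if (nelems > 1 &&
	    (u64)field_cnt * (nelems - 1) > (u64)(max_cnt - info_cnt))
		return -E2BIG;

	return 0;
}
```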
Syzkaller reported a lockdep splat: ============================================ WARNING: possible recursive locking detected 6.11.0-rc6-syzkaller-00019-g67784a74e258 #0 Not tainted -------------------------------------------- syz-executor364/5113 is trying to acquire lock: ffff8880449f1958 (k-slock-AF_INET){+.-.}-{2:2}, at: spin_lock include/linux/spinlock.h:351 [inline] ffff8880449f1958 (k-slock-AF_INET){+.-.}-{2:2}, at: sk_clone_lock+0x2cd/0xf40 net/core/sock.c:2328 but task is already holding lock: ffff88803fe3cb58 (k-slock-AF_INET){+.-.}-{2:2}, at: spin_lock include/linux/spinlock.h:351 [inline] ffff88803fe3cb58 (k-slock-AF_INET){+.-.}-{2:2}, at: sk_clone_lock+0x2cd/0xf40 net/core/sock.c:2328 other info that might help us debug this: Possible unsafe locking scenario: CPU0 ---- lock(k-slock-AF_INET); lock(k-slock-AF_INET); *** DEADLOCK *** May be due to missing lock nesting notation 7 locks held by syz-executor364/5113: #0: ffff8880449f0e18 (sk_lock-AF_INET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1607 [inline] #0: ffff8880449f0e18 (sk_lock-AF_INET){+.+.}-{0:0}, at: mptcp_sendmsg+0x153/0x1b10 net/mptcp/protocol.c:1806 #1: ffff88803fe39ad8 (k-sk_lock-AF_INET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1607 [inline] #1: ffff88803fe39ad8 (k-sk_lock-AF_INET){+.+.}-{0:0}, at: mptcp_sendmsg_fastopen+0x11f/0x530 net/mptcp/protocol.c:1727 #2: ffffffff8e938320 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:326 [inline] #2: ffffffff8e938320 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:838 [inline] #2: ffffffff8e938320 (rcu_read_lock){....}-{1:2}, at: __ip_queue_xmit+0x5f/0x1b80 net/ipv4/ip_output.c:470 #3: ffffffff8e938320 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:326 [inline] #3: ffffffff8e938320 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:838 [inline] #3: ffffffff8e938320 (rcu_read_lock){....}-{1:2}, at: ip_finish_output2+0x45f/0x1390 net/ipv4/ip_output.c:228 #4: ffffffff8e938320 (rcu_read_lock){....}-{1:2}, at: local_lock_acquire include/linux/local_lock_internal.h:29 [inline] #4: ffffffff8e938320 (rcu_read_lock){....}-{1:2}, at: process_backlog+0x33b/0x15b0 net/core/dev.c:6104 #5: ffffffff8e938320 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:326 [inline] #5: ffffffff8e938320 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:838 [inline] #5: ffffffff8e938320 (rcu_read_lock){....}-{1:2}, at: ip_local_deliver_finish+0x230/0x5f0 net/ipv4/ip_input.c:232 #6: ffff88803fe3cb58 (k-slock-AF_INET){+.-.}-{2:2}, at: spin_lock include/linux/spinlock.h:351 [inline] #6: ffff88803fe3cb58 (k-slock-AF_INET){+.-.}-{2:2}, at: sk_clone_lock+0x2cd/0xf40 net/core/sock.c:2328 stack backtrace: CPU: 0 UID: 0 PID: 5113 Comm: syz-executor364 Not tainted 6.11.0-rc6-syzkaller-00019-g67784a74e258 #0 Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014 Call Trace: <IRQ> __dump_stack lib/dump_stack.c:93 [inline] dump_stack_lvl+0x241/0x360 lib/dump_stack.c:119 check_deadlock kernel/locking/lockdep.c:3061 [inline] validate_chain+0x15d3/0x5900 kernel/locking/lockdep.c:3855 __lock_acquire+0x137a/0x2040 kernel/locking/lockdep.c:5142 lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759 __raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline] _raw_spin_lock+0x2e/0x40 kernel/locking/spinlock.c:154 spin_lock include/linux/spinlock.h:351 [inline] sk_clone_lock+0x2cd/0xf40 net/core/sock.c:2328 
mptcp_sk_clone_init+0x32/0x13c0 net/mptcp/protocol.c:3279 subflow_syn_recv_sock+0x931/0x1920 net/mptcp/subflow.c:874 tcp_check_req+0xfe4/0x1a20 net/ipv4/tcp_minisocks.c:853 tcp_v4_rcv+0x1c3e/0x37f0 net/ipv4/tcp_ipv4.c:2267 ip_protocol_deliver_rcu+0x22e/0x440 net/ipv4/ip_input.c:205 ip_local_deliver_finish+0x341/0x5f0 net/ipv4/ip_input.c:233 NF_HOOK+0x3a4/0x450 include/linux/netfilter.h:314 NF_HOOK+0x3a4/0x450 include/linux/netfilter.h:314 __netif_receive_skb_one_core net/core/dev.c:5661 [inline] __netif_receive_skb+0x2bf/0x650 net/core/dev.c:5775 process_backlog+0x662/0x15b0 net/core/dev.c:6108 __napi_poll+0xcb/0x490 net/core/dev.c:6772 napi_poll net/core/dev.c:6841 [inline] net_rx_action+0x89b/0x1240 net/core/dev.c:6963 handle_softirqs+0x2c4/0x970 kernel/softirq.c:554 do_softirq+0x11b/0x1e0 kernel/softirq.c:455 </IRQ> <TASK> __local_bh_enable_ip+0x1bb/0x200 kernel/softirq.c:382 local_bh_enable include/linux/bottom_half.h:33 [inline] rcu_read_unlock_bh include/linux/rcupdate.h:908 [inline] __dev_queue_xmit+0x1763/0x3e90 net/core/dev.c:4450 dev_queue_xmit include/linux/netdevice.h:3105 [inline] neigh_hh_output include/net/neighbour.h:526 [inline] neigh_output include/net/neighbour.h:540 [inline] ip_finish_output2+0xd41/0x1390 net/ipv4/ip_output.c:235 ip_local_out net/ipv4/ip_output.c:129 [inline] __ip_queue_xmit+0x118c/0x1b80 net/ipv4/ip_output.c:535 __tcp_transmit_skb+0x2544/0x3b30 net/ipv4/tcp_output.c:1466 tcp_rcv_synsent_state_process net/ipv4/tcp_input.c:6542 [inline] tcp_rcv_state_process+0x2c32/0x4570 net/ipv4/tcp_input.c:6729 tcp_v4_do_rcv+0x77d/0xc70 net/ipv4/tcp_ipv4.c:1934 sk_backlog_rcv include/net/sock.h:1111 [inline] __release_sock+0x214/0x350 net/core/sock.c:3004 release_sock+0x61/0x1f0 net/core/sock.c:3558 mptcp_sendmsg_fastopen+0x1ad/0x530 net/mptcp/protocol.c:1733 mptcp_sendmsg+0x1884/0x1b10 net/mptcp/protocol.c:1812 sock_sendmsg_nosec net/socket.c:730 [inline] __sock_sendmsg+0x1a6/0x270 net/socket.c:745 ____sys_sendmsg+0x525/0x7d0 net/socket.c:2597 ___sys_sendmsg net/socket.c:2651 [inline] __sys_sendmmsg+0x3b2/0x740 net/socket.c:2737 __do_sys_sendmmsg net/socket.c:2766 [inline] __se_sys_sendmmsg net/socket.c:2763 [inline] __x64_sys_sendmmsg+0xa0/0xb0 net/socket.c:2763 do_syscall_x64 arch/x86/entry/common.c:52 [inline] do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83 entry_SYSCALL_64_after_hwframe+0x77/0x7f RIP: 0033:0x7f04fb13a6b9 Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 01 1a 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48 RSP: 002b:00007ffd651f42d8 EFLAGS: 00000246 ORIG_RAX: 0000000000000133 RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 00007f04fb13a6b9 RDX: 0000000000000001 RSI: 0000000020000d00 RDI: 0000000000000004 RBP: 00007ffd651f4310 R08: 0000000000000001 R09: 0000000000000001 R10: 0000000020000080 R11: 0000000000000246 R12: 00000000000f4240 R13: 00007f04fb187449 R14: 00007ffd651f42f4 R15: 00007ffd651f4300 </TASK> As noted by Cong Wang, the splat is a false positive, but the code path leading to the report is an unexpected one: a client is attempting an MPC handshake towards the in-kernel listener created by the in-kernel PM for a port-based signal endpoint. Such connections will never be accepted; many of them can fill the listener queue, preventing the creation of MPJ subflows via such a listener - its intended role. Explicitly detect this scenario at initial-syn time and drop the incoming MPC request.
Fixes: 1729cf1 ("mptcp: create the listening socket for new port") Cc: [email protected] Reported-by: [email protected] Closes: https://syzkaller.appspot.com/bug?extid=f4aacdfef2c6a6529c3e Cc: Cong Wang <[email protected]> Signed-off-by: Paolo Abeni <[email protected]> Reviewed-by: Matthieu Baerts (NGI0) <[email protected]> Reviewed-by: Mat Martineau <[email protected]> Signed-off-by: Matthieu Baerts (NGI0) <[email protected]> Link: https://patch.msgid.link/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
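The commit message above states the policy but does not quote the code. Below is a minimal sketch of just the decision, with both inputs passed as plain booleans because the exact flag the patch adds to the MPTCP socket is not shown here; the function name is illustrative.

```c
#include <linux/types.h>

/* Decide whether an incoming SYN may proceed on this listener. MP_JOIN
 * handshakes are the traffic a PM-created port listener exists for and stay
 * allowed; only the initial MP_CAPABLE handshake is refused, so requests
 * that could never be accepted no longer pile up in the accept queue.
 */
static bool mpc_syn_allowed(bool syn_has_mp_capable, bool listener_created_by_pm)
{
	return !(syn_has_mp_capable && listener_created_by_pm);
}
```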
…/kernel/git/kvmarm/kvmarm into HEAD KVM/arm64 fixes for 6.12, take #2 - Fix the guest view of the ID registers, making the relevant fields writable from userspace (affecting ID_AA64DFR0_EL1 and ID_AA64PFR1_EL1) - Correctly expose S1PIE to guests, fixing a regression introduced in 6.12-rc1 with the S1POE support - Fix the recycling of stage-2 shadow MMUs by tracking the context (are we allowed to block or not) as well as the recycling state - Address a couple of issues with the vgic when userspace misconfigures the emulation, resulting in various splats. Headaches courtesy of our Syzkaller friends
Currently, when configuring the TMU (Time Management Unit) mode of a given router, we take into account only its own TMU requirements, ignoring other routers in the domain. This is problematic if the router we are configuring has lower TMU requirements than what is already configured in the domain. In the scenario below, we have a host router with two USB4 ports: A and B. Port A is connected to device router #1 (which supports CL states) and an existing DisplayPort tunnel, thus the TMU mode is HiFi uni-directional.

1. Initial topology

        [Host]
         A/
         /
   [Device #1]
    /
  Monitor

2. Plug in device #2 (that supports CL states) to downstream port B of the host router

        [Host]
         A/  B\
         /     \
   [Device #1]  [Device #2]
    /
  Monitor

The TMU mode on port B and port A will be configured to LowRes, which is not what we want and will cause the monitor to start flickering. To address this we first scan the domain and search for any router configured to HiFi uni-directional mode, and if found, configure the TMU mode of the given router to HiFi uni-directional as well. Cc: [email protected] Signed-off-by: Gil Fine <[email protected]> Signed-off-by: Mika Westerberg <[email protected]>
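As an illustrative sketch of the selection rule only (the enum values and the flat array standing in for the domain walk are simplifications, not the thunderbolt driver's data structures or traversal helpers):

```c
#include <linux/types.h>

enum tmu_mode {
	TMU_MODE_OFF,
	TMU_MODE_LOWRES,
	TMU_MODE_HIFI_UNI,
	TMU_MODE_HIFI_BI,
};

/* If any router already enumerated in the domain runs TMU in HiFi
 * uni-directional mode (e.g. because a DisplayPort tunnel needs it), a newly
 * added router must not downgrade the domain to LowRes: it is configured to
 * HiFi uni-directional as well. Otherwise its own requested mode is used.
 */
static enum tmu_mode tmu_mode_for_new_router(const enum tmu_mode *domain_modes,
					     size_t n_routers,
					     enum tmu_mode requested)
{
	size_t i;

	for (i = 0; i < n_routers; i++) {
		if (domain_modes[i] == TMU_MODE_HIFI_UNI)
			return TMU_MODE_HIFI_UNI;
	}
	return requested;
}
```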
Fix possible use-after-free in 'taprio_dump()' by adding RCU read-side critical section there. Never seen on x86 but found on a KASAN-enabled arm64 system when investigating https://syzkaller.appspot.com/bug?extid=b65e0af58423fc8a73aa: [T15862] BUG: KASAN: slab-use-after-free in taprio_dump+0xa0c/0xbb0 [T15862] Read of size 4 at addr ffff0000d4bb88f8 by task repro/15862 [T15862] [T15862] CPU: 0 UID: 0 PID: 15862 Comm: repro Not tainted 6.11.0-rc1-00293-gdefaf1a2113a-dirty #2 [T15862] Hardware name: QEMU QEMU Virtual Machine, BIOS edk2-20240524-5.fc40 05/24/2024 [T15862] Call trace: [T15862] dump_backtrace+0x20c/0x220 [T15862] show_stack+0x2c/0x40 [T15862] dump_stack_lvl+0xf8/0x174 [T15862] print_report+0x170/0x4d8 [T15862] kasan_report+0xb8/0x1d4 [T15862] __asan_report_load4_noabort+0x20/0x2c [T15862] taprio_dump+0xa0c/0xbb0 [T15862] tc_fill_qdisc+0x540/0x1020 [T15862] qdisc_notify.isra.0+0x330/0x3a0 [T15862] tc_modify_qdisc+0x7b8/0x1838 [T15862] rtnetlink_rcv_msg+0x3c8/0xc20 [T15862] netlink_rcv_skb+0x1f8/0x3d4 [T15862] rtnetlink_rcv+0x28/0x40 [T15862] netlink_unicast+0x51c/0x790 [T15862] netlink_sendmsg+0x79c/0xc20 [T15862] __sock_sendmsg+0xe0/0x1a0 [T15862] ____sys_sendmsg+0x6c0/0x840 [T15862] ___sys_sendmsg+0x1ac/0x1f0 [T15862] __sys_sendmsg+0x110/0x1d0 [T15862] __arm64_sys_sendmsg+0x74/0xb0 [T15862] invoke_syscall+0x88/0x2e0 [T15862] el0_svc_common.constprop.0+0xe4/0x2a0 [T15862] do_el0_svc+0x44/0x60 [T15862] el0_svc+0x50/0x184 [T15862] el0t_64_sync_handler+0x120/0x12c [T15862] el0t_64_sync+0x190/0x194 [T15862] [T15862] Allocated by task 15857: [T15862] kasan_save_stack+0x3c/0x70 [T15862] kasan_save_track+0x20/0x3c [T15862] kasan_save_alloc_info+0x40/0x60 [T15862] __kasan_kmalloc+0xd4/0xe0 [T15862] __kmalloc_cache_noprof+0x194/0x334 [T15862] taprio_change+0x45c/0x2fe0 [T15862] tc_modify_qdisc+0x6a8/0x1838 [T15862] rtnetlink_rcv_msg+0x3c8/0xc20 [T15862] netlink_rcv_skb+0x1f8/0x3d4 [T15862] rtnetlink_rcv+0x28/0x40 [T15862] netlink_unicast+0x51c/0x790 [T15862] netlink_sendmsg+0x79c/0xc20 [T15862] __sock_sendmsg+0xe0/0x1a0 [T15862] ____sys_sendmsg+0x6c0/0x840 [T15862] ___sys_sendmsg+0x1ac/0x1f0 [T15862] __sys_sendmsg+0x110/0x1d0 [T15862] __arm64_sys_sendmsg+0x74/0xb0 [T15862] invoke_syscall+0x88/0x2e0 [T15862] el0_svc_common.constprop.0+0xe4/0x2a0 [T15862] do_el0_svc+0x44/0x60 [T15862] el0_svc+0x50/0x184 [T15862] el0t_64_sync_handler+0x120/0x12c [T15862] el0t_64_sync+0x190/0x194 [T15862] [T15862] Freed by task 6192: [T15862] kasan_save_stack+0x3c/0x70 [T15862] kasan_save_track+0x20/0x3c [T15862] kasan_save_free_info+0x4c/0x80 [T15862] poison_slab_object+0x110/0x160 [T15862] __kasan_slab_free+0x3c/0x74 [T15862] kfree+0x134/0x3c0 [T15862] taprio_free_sched_cb+0x18c/0x220 [T15862] rcu_core+0x920/0x1b7c [T15862] rcu_core_si+0x10/0x1c [T15862] handle_softirqs+0x2e8/0xd64 [T15862] __do_softirq+0x14/0x20 Fixes: 18cdd2f ("net/sched: taprio: taprio_dump and taprio_change are protected by rtnl_mutex") Acked-by: Vinicius Costa Gomes <[email protected]> Signed-off-by: Dmitry Antipov <[email protected]> Link: https://patch.msgid.link/[email protected] Signed-off-by: Paolo Abeni <[email protected]>
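The pattern the fix applies is the standard one for objects freed via call_rcu(). Below is a self-contained sketch with stand-in structures (taprio's real schedule objects carry far more state than one field) showing a dump-side read done entirely under the RCU read lock, which is what keeps the schedule from being freed mid-dump:

```c
#include <linux/rcupdate.h>
#include <linux/types.h>

/* Reduced stand-ins: the schedule is published through an __rcu pointer and
 * freed via call_rcu() when a new one replaces it.
 */
struct sched_snapshot {
	s64 base_time;
};

struct taprio_like {
	struct sched_snapshot __rcu *oper_sched;
};

static s64 dump_base_time(struct taprio_like *q)
{
	struct sched_snapshot *oper;
	s64 base_time = 0;

	rcu_read_lock();
	oper = rcu_dereference(q->oper_sched);
	if (oper)
		base_time = oper->base_time;	/* object pinned by RCU here */
	rcu_read_unlock();

	return base_time;
}
```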
Running rcutorture scenario TREE05, the below warning is triggered. [ 32.604594] WARNING: suspicious RCU usage [ 32.605928] 6.11.0-rc5-00040-g4ba4f1afb6a9 #55238 Not tainted [ 32.607812] ----------------------------- [ 32.609140] kernel/events/core.c:13946 RCU-list traversed in non-reader section!! [ 32.611595] other info that might help us debug this: [ 32.614247] rcu_scheduler_active = 2, debug_locks = 1 [ 32.616392] 3 locks held by cpuhp/4/35: [ 32.617687] #0: ffffffffb666a650 (cpu_hotplug_lock){++++}-{0:0}, at: cpuhp_thread_fun+0x4e/0x200 [ 32.620563] #1: ffffffffb666cd20 (cpuhp_state-down){+.+.}-{0:0}, at: cpuhp_thread_fun+0x4e/0x200 [ 32.623412] #2: ffffffffb677c288 (pmus_lock){+.+.}-{3:3}, at: perf_event_exit_cpu_context+0x32/0x2f0 In perf_event_clear_cpumask(), uses list_for_each_entry_rcu() without an obvious RCU read-side critical section. Either pmus_srcu or pmus_lock is good enough to protect the pmus list. In the current context, pmus_lock is already held. The list_for_each_entry_rcu() is not required. Fixes: 4ba4f1a ("perf: Generic hotplug support for a PMU with a scope") Closes: https://lore.kernel.org/lkml/2b66dff8-b827-494b-b151-1ad8d56f13e6@paulmck-laptop/ Closes: https://lore.kernel.org/oe-lkp/[email protected] Reported-by: "Paul E. McKenney" <[email protected]> Reported-by: kernel test robot <[email protected]> Suggested-by: Peter Zijlstra <[email protected]> Signed-off-by: Kan Liang <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: "Paul E. McKenney" <[email protected]> Link: https://lore.kernel.org/r/[email protected]
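A minimal stand-alone sketch of the resulting pattern, using stand-in names for the pmus list and pmus_lock from kernel/events/core.c: writers update the RCU list under the mutex, so a path that already holds that mutex can use the plain iterator and avoid the lockdep complaint entirely.

```c
#include <linux/list.h>
#include <linux/lockdep.h>
#include <linux/mutex.h>
#include <linux/rculist.h>

struct pmu_like {
	struct list_head entry;
};

static LIST_HEAD(pmu_list);		/* plays the role of the pmus list */
static DEFINE_MUTEX(pmu_list_lock);	/* plays the role of pmus_lock */

/* Writers modify the list with list_add_rcu()/list_del_rcu() while holding
 * the mutex; lockless readers use list_for_each_entry_rcu() under
 * rcu_read_lock(). The hotplug callback in the report already holds the
 * mutex, so the plain iterator is both sufficient and correct.
 */
static void walk_with_lock_held(void)
{
	struct pmu_like *p;

	lockdep_assert_held(&pmu_list_lock);
	list_for_each_entry(p, &pmu_list, entry) {
		/* per-entry cleanup for the outgoing CPU would go here */
	}
}
```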
Hou Tao says: ==================== Add the missing BPF_LINK_TYPE invocation for sockmap From: Hou Tao <[email protected]> Hi, The tiny patch set fixes the out-of-bounds read problem when reading the fdinfo of a sockmap link fd. And in order to spot such omissions early for newly added link types in the future, it also checks the validity of the link->type and adds a WARN_ONCE() for a missed invocation. Please see individual patches for more details. And comments are always welcome. v3: * patch #2: check and warn the validity of link->type instead of adding a static assertion for bpf_link_type_strs array. v2: http://lore.kernel.org/bpf/[email protected] ==================== Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Andrii Nakryiko <[email protected]>
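As a sketch of the defensive lookup described in the v3 note (the table below is illustrative, not the kernel's bpf_link_type_strs array), the fdinfo path can reject a link type that was never given a name instead of indexing past the table or printing through a NULL pointer:

```c
#include <linux/bug.h>
#include <linux/kernel.h>

static const char *const link_type_strs[] = {
	[0] = "unspec",
	[1] = "raw_tracepoint",
	[2] = "tracing",
	/* ... one entry per link type; a forgotten entry stays NULL ... */
};

static const char *link_type_name(unsigned int type)
{
	/* Catch both "enum grew past the table" and "entry never filled in" */
	if (type >= ARRAY_SIZE(link_type_strs) || !link_type_strs[type]) {
		WARN_ONCE(1, "missing name for link type %u", type);
		return "<unknown>";
	}
	return link_type_strs[type];
}
```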
generic/077 on x86_32 CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP=y with highmem, on huge=always tmpfs, issues a warning and then hangs (interruptibly): WARNING: CPU: 5 PID: 3517 at mm/highmem.c:622 kunmap_local_indexed+0x62/0xc9 CPU: 5 UID: 0 PID: 3517 Comm: cp Not tainted 6.12.0-rc4 #2 ... copy_page_from_iter_atomic+0xa6/0x5ec generic_perform_write+0xf6/0x1b4 shmem_file_write_iter+0x54/0x67 Fix copy_page_from_iter_atomic() by limiting it in that case (include/linux/skbuff.h skb_frag_must_loop() does similar). But going forward, perhaps CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP is too surprising, has outlived its usefulness, and should just be removed? Fixes: 908a1ad ("iov_iter: Handle compound highmem pages in copy_page_from_iter_atomic()") Signed-off-by: Hugh Dickins <[email protected]> Link: https://lore.kernel.org/r/[email protected] Reviewed-by: Christoph Hellwig <[email protected]> Cc: [email protected] Signed-off-by: Christian Brauner <[email protected]>
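Below is a sketch of the chunking idea under the stated assumption that each kmap_local_page() mapping only covers a single page (which is exactly what CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP enforces); the helper is illustrative and simplified, not the iov_iter code itself:

```c
#include <linux/highmem.h>
#include <linux/minmax.h>
#include <linux/mm.h>
#include <linux/string.h>

/* Copy len bytes into a compound (possibly highmem) page starting at byte
 * offset, never letting a single memcpy() cross a page boundary, so each
 * chunk stays inside the one page its kmap_local_page() mapping covers.
 */
static void copy_to_compound_page(struct page *head, size_t offset,
				  const void *src, size_t len)
{
	while (len) {
		size_t in_page = PAGE_SIZE - offset_in_page(offset);
		size_t chunk = min(len, in_page);
		void *dst = kmap_local_page(nth_page(head, offset / PAGE_SIZE));

		memcpy(dst + offset_in_page(offset), src, chunk);
		kunmap_local(dst);

		src += chunk;
		offset += chunk;
		len -= chunk;
	}
}
```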
Hou Tao says: ==================== The patch set fixes several issues in bits iterator. Patch #1 fixes the kmemleak problem of bits iterator. Patch #2~#3 fix the overflow problem of nr_bits. Patch #4 fixes the potential stack corruption when bits iterator is used on 32-bit host. Patch #5 adds more test cases for bits iterator. Please see the individual patches for more details. And comments are always welcome. --- v4: * patch #1: add ack from Yafang * patch #3: revert code-churn like changes: (1) compute nr_bytes and nr_bits before the check of nr_words. (2) use nr_bits == 64 to check for single u64, preventing build warning on 32-bit hosts. * patch #4: use "BITS_PER_LONG == 32" instead of "!defined(CONFIG_64BIT)" v3: https://lore.kernel.org/bpf/[email protected]/T/#t * split the bits-iterator related patches from "Misc fixes for bpf" patch set * patch #1: use "!nr_bits || bits >= nr_bits" to stop the iteration * patch #2: add a new helper for the overflow problem * patch #3: decrease the limitation from 512 to 511 and check whether nr_bytes is too large for bpf memory allocator explicitly * patch #5: add two more test cases for bit iterator v2: http://lore.kernel.org/bpf/[email protected] ==================== Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
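As an illustrative validation helper reflecting the limits mentioned above (the 511-word cap and the fixed u64 word size; the name and signature are not the upstream kfunc's):

```c
#include <linux/errno.h>
#include <linux/types.h>

#define BITS_ITER_MAX_WORDS	511	/* cap taken from the changelog above */

/* The iterator operates on u64 words even on 32-bit hosts. Bounding
 * nr_words first means nr_words * 64 is at most 32704, so the nr_bits
 * computation cannot overflow and the backing allocation stays small
 * enough for the BPF memory allocator.
 */
static int bits_iter_check_args(u32 nr_words, u32 *nr_bits)
{
	if (!nr_words || nr_words > BITS_ITER_MAX_WORDS)
		return -EINVAL;

	*nr_bits = nr_words * 64;
	return 0;
}
```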
Petr Machata says: ==================== mlxsw: Fixes In this patchset: - Tx header should be pushed for each packet which is transmitted via Spectrum ASICs. Patch #1 adds a missing call to skb_cow_head() to make sure that there is both enough room to push the Tx header and that the SKB header is not cloned and can be modified. - Commit b5b60bb ("mlxsw: pci: Use page pool for Rx buffers allocation") converted mlxsw to use page pool for Rx buffers allocation. Sync for CPU and for device should be done for Rx pages. Patches #2 and #3 add the missing calls to sync pages for, respectively, the CPU and the device. - Patch #4 then fixes a bug in IPv6 GRE forwarding offload. Patch #5 adds a generic forwarding test that fails with mlxsw ports prior to the fix. ==================== Link: https://patch.msgid.link/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
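A sketch of the Tx-header preparation described in the first bullet, with an invented header length and helper name; skb_cow_head() is the call the patch adds, and it guarantees both the headroom and an unshared header before skb_push() writes into it:

```c
#include <linux/errno.h>
#include <linux/skbuff.h>
#include <linux/string.h>

#define DRIVER_TXHDR_LEN 16	/* illustrative Tx header size */

static int xmit_prepare_txhdr(struct sk_buff *skb)
{
	u8 *txhdr;

	/* Reallocates the header if headroom is short or the header is
	 * shared with a clone; only then is it safe to push and write.
	 */
	if (skb_cow_head(skb, DRIVER_TXHDR_LEN))
		return -ENOMEM;		/* caller drops the skb and counts it */

	txhdr = skb_push(skb, DRIVER_TXHDR_LEN);
	memset(txhdr, 0, DRIVER_TXHDR_LEN);
	/* fill in the hardware Tx descriptor fields here */
	return 0;
}
```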
When we compile and load lib/slub_kunit.c, it will cause a panic. The root cause is that __kmalloc_cache_noprof was directly called instead of kmem_cache_alloc, which resulted in no alloc_tag being allocated. This caused current->alloc_tag to be null, leading to a null pointer dereference in alloc_tag_ref_set. Despite the fact that my colleague Pei Xiao will later fix the code in slub_kunit.c, we still need to fix the null pointer check logic for ref and tag to avoid a panic caused by a null pointer dereference. Here is the log for the panic: [ 74.779373][ T2158] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000020 [ 74.780130][ T2158] Mem abort info: [ 74.780406][ T2158] ESR = 0x0000000096000004 [ 74.780756][ T2158] EC = 0x25: DABT (current EL), IL = 32 bits [ 74.781225][ T2158] SET = 0, FnV = 0 [ 74.781529][ T2158] EA = 0, S1PTW = 0 [ 74.781836][ T2158] FSC = 0x04: level 0 translation fault [ 74.782288][ T2158] Data abort info: [ 74.782577][ T2158] ISV = 0, ISS = 0x00000004, ISS2 = 0x00000000 [ 74.783068][ T2158] CM = 0, WnR = 0, TnD = 0, TagAccess = 0 [ 74.783533][ T2158] GCS = 0, Overlay = 0, DirtyBit = 0, Xs = 0 [ 74.784010][ T2158] user pgtable: 4k pages, 48-bit VAs, pgdp=0000000105f34000 [ 74.784586][ T2158] [0000000000000020] pgd=0000000000000000, p4d=0000000000000000 [ 74.785293][ T2158] Internal error: Oops: 0000000096000004 [#1] SMP [ 74.785805][ T2158] Modules linked in: slub_kunit kunit ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 ipt_REJECT nf_reject_ipv4 xt_conntrack ebtable_nat ebtable_broute ip6table_nat ip6table_mangle 4 [ 74.790661][ T2158] CPU: 0 UID: 0 PID: 2158 Comm: kunit_try_catch Kdump: loaded Tainted: G W N 6.12.0-rc3+ #2 [ 74.791535][ T2158] Tainted: [W]=WARN, [N]=TEST [ 74.791889][ T2158] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 [ 74.792479][ T2158] pstate: 40400005 (nZcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--) [ 74.793101][ T2158] pc : alloc_tagging_slab_alloc_hook+0x120/0x270 [ 74.793607][ T2158] lr : alloc_tagging_slab_alloc_hook+0x120/0x270 [ 74.794095][ T2158] sp : ffff800084d33cd0 [ 74.794418][ T2158] x29: ffff800084d33cd0 x28: 0000000000000000 x27: 0000000000000000 [ 74.795095][ T2158] x26: 0000000000000000 x25: 0000000000000012 x24: ffff80007b30e314 [ 74.795822][ T2158] x23: ffff000390ff6f10 x22: 0000000000000000 x21: 0000000000000088 [ 74.796555][ T2158] x20: ffff000390285840 x19: fffffd7fc3ef7830 x18: ffffffffffffffff [ 74.797283][ T2158] x17: ffff8000800e63b4 x16: ffff80007b33afc4 x15: ffff800081654c00 [ 74.798011][ T2158] x14: 0000000000000000 x13: 205d383531325420 x12: 5b5d383734363537 [ 74.798744][ T2158] x11: ffff800084d337e0 x10: 000000000000005d x9 : 00000000ffffffd0 [ 74.799476][ T2158] x8 : 7f7f7f7f7f7f7f7f x7 : ffff80008219d188 x6 : c0000000ffff7fff [ 74.800206][ T2158] x5 : ffff0003fdbc9208 x4 : ffff800081edd188 x3 : 0000000000000001 [ 74.800932][ T2158] x2 : 0beaa6dee1ac5a00 x1 : 0beaa6dee1ac5a00 x0 : ffff80037c2cb000 [ 74.801656][ T2158] Call trace: [ 74.801954][ T2158] alloc_tagging_slab_alloc_hook+0x120/0x270 [ 74.802494][ T2158] __kmalloc_cache_noprof+0x148/0x33c [ 74.802976][ T2158] test_kmalloc_redzone_access+0x4c/0x104 [slub_kunit] [ 74.803607][ T2158] kunit_try_run_case+0x70/0x17c [kunit] [ 74.804124][ T2158] kunit_generic_run_threadfn_adapter+0x2c/0x4c [kunit] [ 74.804768][ T2158] kthread+0x10c/0x118 [ 74.805141][ T2158] ret_from_fork+0x10/0x20 [ 74.805540][ T2158] Code: b9400a80 11000400 b9000a80 97ffd858 (f94012d3) [ 74.806176][ T2158] SMP: stopping secondary CPUs [ 74.808130][ T2158] Starting
crashdump kernel... Link: https://lkml.kernel.org/r/[email protected] Fixes: e0a955b ("mm/codetag: add pgalloc_tag_copy()") Signed-off-by: Hao Ge <[email protected]> Acked-by: Suren Baghdasaryan <[email protected]> Suggested-by: Suren Baghdasaryan <[email protected]> Acked-by: Yu Zhao <[email protected]> Cc: Kent Overstreet <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
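Below is a reduced sketch of the hardened check (the structures are stripped down to what the example needs and are not the real alloc_tag types): both the reference and the tag must be checked before being wired together, since an allocation that did not go through an instrumented caller leaves current->alloc_tag unset.

```c
#include <linux/types.h>

struct alloc_tag_like { long counter; };
struct tag_ref_like  { struct alloc_tag_like *tag; };

static bool tag_ref_set(struct tag_ref_like *ref, struct alloc_tag_like *tag)
{
	/* Either side may legitimately be NULL; accounting is simply skipped
	 * instead of dereferencing a NULL pointer as in the panic above.
	 */
	if (!ref || !tag)
		return false;

	ref->tag = tag;
	ref->tag->counter++;
	return true;
}
```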
The scope of the TX skb is wider than just mse102x_tx_frame_spi(), so in case the TX skb room needs to be expanded, we should free the temporary skb instead of the original skb. Otherwise the original TX skb pointer would be freed again in mse102x_tx_work(), which leads to crashes: Internal error: Oops: 0000000096000004 [#2] PREEMPT SMP CPU: 0 PID: 712 Comm: kworker/0:1 Tainted: G D 6.6.23 Hardware name: chargebyte Charge SOM DC-ONE (DT) Workqueue: events mse102x_tx_work [mse102x] pstate: 20400009 (nzCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--) pc : skb_release_data+0xb8/0x1d8 lr : skb_release_data+0x1ac/0x1d8 sp : ffff8000819a3cc0 x29: ffff8000819a3cc0 x28: ffff0000046daa60 x27: ffff0000057f2dc0 x26: ffff000005386c00 x25: 0000000000000002 x24: 00000000ffffffff x23: 0000000000000000 x22: 0000000000000001 x21: ffff0000057f2e50 x20: 0000000000000006 x19: 0000000000000000 x18: ffff00003fdacfcc x17: e69ad452d0c49def x16: 84a005feff870102 x15: 0000000000000000 x14: 000000000000024a x13: 0000000000000002 x12: 0000000000000000 x11: 0000000000000400 x10: 0000000000000930 x9 : ffff00003fd913e8 x8 : fffffc00001bc008 x7 : 0000000000000000 x6 : 0000000000000008 x5 : ffff00003fd91340 x4 : 0000000000000000 x3 : 0000000000000009 x2 : 00000000fffffffe x1 : 0000000000000000 x0 : 0000000000000000 Call trace: skb_release_data+0xb8/0x1d8 kfree_skb_reason+0x48/0xb0 mse102x_tx_work+0x164/0x35c [mse102x] process_one_work+0x138/0x260 worker_thread+0x32c/0x438 kthread+0x118/0x11c ret_from_fork+0x10/0x20 Code: aa1303e0 97fffab6 72001c1f 54000141 (f9400660) Cc: [email protected] Fixes: 2f207cb ("net: vertexcom: Add MSE102x SPI support") Signed-off-by: Stefan Wahren <[email protected]> Link: https://patch.msgid.link/[email protected] Signed-off-by: Jakub Kicinski <[email protected]>
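A sketch of the ownership rule behind the fix, with shortened names and an invented padding size: the caller keeps ownership of the original skb and frees it later, so only the locally allocated copy may be freed here.

```c
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/skbuff.h>

static int spi_send_frame(struct sk_buff *txb /* owned by the caller */)
{
	struct sk_buff *tskb = NULL;
	int ret;

	/* Not enough head/tail room for the SPI framing: work on a private,
	 * expanded copy and leave the caller's skb untouched.
	 */
	if (skb_headroom(txb) < 4 || skb_tailroom(txb) < 4) {
		tskb = skb_copy_expand(txb, 4, 4, GFP_KERNEL);
		if (!tskb)
			return -ENOMEM;
		txb = tskb;
	}

	ret = 0;		/* ... actual SPI transfer of txb ... */

	dev_kfree_skb(tskb);	/* free only the copy; NULL is a no-op */
	return ret;
}
```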
Demotion reloaded, without migration