Fix cpu hotplug atomic sleep issue
We move the spinlock unlock before the thread creation. This should be
safe because the thread creation code doesn't actually manipulate any
taskq data structures; that's done by the thread once it's created.

We also remove the assertion that maxthreads equals the current thread
count plus one; that assertion can fail if multiple hotplug events arrive
in quick succession and the first new taskq thread hasn't had a chance to
start running yet.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Matthew Ahrens <[email protected]>
Reviewed-by: Tony Nguyen <[email protected]>
Signed-off-by: Paul Dagnelie <[email protected]>
Closes openzfs#12714
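
For context on the "atomic sleep" in the title: spin_lock_irqsave_nested()
leaves the CPU in atomic context, while taskq_thread_create() ultimately
reaches kthread_create(), which allocates memory and can sleep, so calling
it under the lock trips the kernel's "sleeping function called from invalid
context" check. The sketch below is not the OpenZFS code — my_queue,
my_expand(), and my_worker() are made-up names and error handling is pared
down — but it illustrates the same unlock-before-create ordering the diff
adopts, and it is safe only under the same assumption the message states:
thread creation itself touches no queue state.

/*
 * Illustrative sketch only (not OpenZFS code): the unlock-before-create
 * ordering described in the commit message, with made-up names.
 */
#include <linux/spinlock.h>
#include <linux/kthread.h>
#include <linux/err.h>

struct my_queue {
	spinlock_t	lock;
	int		nthreads;	/* threads currently running */
	int		maxthreads;	/* desired thread count */
};

/* Thread body; a real worker would take q->lock and pull work items. */
static int
my_worker(void *arg)
{
	return (0);
}

static int
my_expand(struct my_queue *q)
{
	unsigned long flags;
	int err = 0;

	spin_lock_irqsave(&q->lock, flags);

	if (q->maxthreads <= q->nthreads) {
		spin_unlock_irqrestore(&q->lock, flags);
		return (err);
	}

	/*
	 * Drop the lock before creating the thread: kthread_run() can
	 * sleep (it allocates memory and waits for the new task), and
	 * sleeping inside spin_lock_irqsave() is the atomic-sleep bug.
	 * Dropping the lock here is safe only because thread creation
	 * touches no queue state; the new thread re-takes the lock
	 * once it starts running.
	 */
	spin_unlock_irqrestore(&q->lock, flags);

	if (IS_ERR(kthread_run(my_worker, q, "my_worker")))
		err = -1;

	return (err);
}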
pcd1193182 authored and tonyhutter committed Feb 16, 2022
1 parent 5c80a25 commit 7bd292e
Showing 1 changed file with 6 additions and 5 deletions.
11 changes: 6 additions & 5 deletions module/os/linux/spl/spl-taskq.c
@@ -1298,8 +1298,10 @@ spl_taskq_expand(unsigned int cpu, struct hlist_node *node)
 	ASSERT(tq);
 	spin_lock_irqsave_nested(&tq->tq_lock, flags, tq->tq_lock_class);
 
-	if (!(tq->tq_flags & TASKQ_ACTIVE))
-		goto out;
+	if (!(tq->tq_flags & TASKQ_ACTIVE)) {
+		spin_unlock_irqrestore(&tq->tq_lock, flags);
+		return (err);
+	}
 
 	ASSERT(tq->tq_flags & TASKQ_THREADS_CPU_PCT);
 	int nthreads = MIN(tq->tq_cpu_pct, 100);
@@ -1308,13 +1310,12 @@ spl_taskq_expand(unsigned int cpu, struct hlist_node *node)
 
 	if (!((tq->tq_flags & TASKQ_DYNAMIC) && spl_taskq_thread_dynamic) &&
 	    tq->tq_maxthreads > tq->tq_nthreads) {
-		ASSERT3U(tq->tq_maxthreads, ==, tq->tq_nthreads + 1);
+		spin_unlock_irqrestore(&tq->tq_lock, flags);
 		taskq_thread_t *tqt = taskq_thread_create(tq);
 		if (tqt == NULL)
 			err = -1;
+		return (err);
 	}
-
-out:
 	spin_unlock_irqrestore(&tq->tq_lock, flags);
 	return (err);
 }
