Fix: segmentation fault in the write throttle during import #4652

Closed
wants to merge 2 commits
13 changes: 10 additions & 3 deletions module/zfs/dsl_pool.c
100644 → 100755
@@ -182,12 +182,19 @@ dsl_pool_init(spa_t *spa, uint64_t txg, dsl_pool_t **dpp)
 	int err;
 	dsl_pool_t *dp = dsl_pool_open_impl(spa, txg);
 
+	/*
+	 * Initialize the caller's dsl_pool_t structure before we actually
+	 * open the meta objset, in case a self-healing write zio issued by
+	 * the open process wants to access it before we are ready.
+	 */
+	*dpp = dp;
+
 	err = dmu_objset_open_impl(spa, NULL, &dp->dp_meta_rootbp,
 	    &dp->dp_meta_objset);
-	if (err != 0)
+	if (err != 0) {
 		dsl_pool_close(dp);
-	else
-		*dpp = dp;
+		*dpp = NULL;
+	}
 
 	return (err);
 }
17 changes: 14 additions & 3 deletions module/zfs/vdev_queue.c
@@ -249,20 +249,31 @@ static int
 vdev_queue_max_async_writes(spa_t *spa)
 {
 	int writes;
-	uint64_t dirty = spa->spa_dsl_pool->dp_dirty_total;
+	uint64_t dirty = 0;
+	dsl_pool_t *dp = spa_get_dsl(spa);
 	uint64_t min_bytes = zfs_dirty_data_max *
 	    zfs_vdev_async_write_active_min_dirty_percent / 100;
 	uint64_t max_bytes = zfs_dirty_data_max *
 	    zfs_vdev_async_write_active_max_dirty_percent / 100;
 
+	/*
+	 * Async writes may be issued before the pool finishes initializing,
+	 * which means we cannot yet read the dirty-data statistics from the
+	 * spa. Typically this happens when a self-healing zio is issued by a
+	 * mirror during import. In that case we push data out as fast as
+	 * possible to speed up the initialization.
+	 */
+	if (dp == NULL)
+		return (zfs_vdev_async_write_max_active);
+
 	/*
 	 * Sync tasks correspond to interactive user actions. To reduce the
 	 * execution time of those actions we push data out as fast as possible.
 	 */
-	if (spa_has_pending_synctask(spa)) {
+	if (spa_has_pending_synctask(spa))
 		return (zfs_vdev_async_write_max_active);
-	}
 
+	dirty = dp->dp_dirty_total;
 	if (dirty < min_bytes)
 		return (zfs_vdev_async_write_min_active);
 	if (dirty > max_bytes)