DLPX-86682 DOSE Migration: Evacuate data blocks based on their block boundaries (openzfs#1021)

= Problem

The current data-evacuation design for migration breaks up every segment
in the indirect mapping into multiple 512-byte segments on the
destination vdev, which is the object store. This causes frees and
reads of those blocks to be split into multiple I/Os, hurting our CPU
usage and I/O throughput. Moreover, ingesting all these 512-byte blocks
into the zettacache induces unnecessary overhead in some of its
subsystems, such as the SlabAllocator and Index Merging.

A side issue that's also fixed in this PR is the sync-write semantics
for hybrid pools (object store vdev + normal vdevs). Currently our VMs
drop ZIL writes when they could simply be satisfied by normal-class
vdevs.

= This Patch

Initiates a pool-wide scan that records the block boundaries of all
the blocks that belong to the device we want to remove. These block
boundaries are then used to issue ZIOs of the exact block size to the
object store, avoiding the 512-byte split issue and resulting in an
object-store vdev layout similar to that of a pure object-based pool.

The block boundaries are kept in memory in a B-tree and persisted in
a spacemap. The B-tree is later used during the creation of the
indirect mappings to issue ZIOs of the right block size.
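The shape of that B-tree can be sketched as follows. This is an illustrative model only, not the actual kernel structures or their names: an ordered map from device offset to block size, queried by range when building the indirect mapping for a segment.

```rust
use std::collections::BTreeMap;

/// Illustrative sketch of the block-boundary tree: maps each scanned
/// block's starting device offset to its size in bytes. The ordered map
/// makes it cheap to find all blocks inside a mapping segment.
#[derive(Default)]
struct BlockBoundaries {
    /// device offset -> block size in bytes
    tree: BTreeMap<u64, u64>,
}

impl BlockBoundaries {
    /// Record one block boundary found by the pool-wide scan.
    fn record(&mut self, offset: u64, size: u64) {
        self.tree.insert(offset, size);
    }

    /// Return the (offset, size) pairs of blocks whose starting offset
    /// falls inside [start, start + len), i.e. the exact-size ZIOs to
    /// issue for that segment instead of 512-byte pieces.
    fn blocks_in_segment(&self, start: u64, len: u64) -> Vec<(u64, u64)> {
        self.tree
            .range(start..start + len)
            .map(|(&off, &sz)| (off, sz))
            .collect()
    }
}
```

With a 2KB block at offset 0 and a 512-byte block at offset 2048 recorded, a query for the segment [0, 2560) yields those two exact block sizes rather than five 512-byte writes.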

A new on-disk feature flag is created for hybrid pools (which are the
first step of migration). A feature flag for the agent is also
introduced since we changed some protocol semantics for zero-length
writes (see code for more details and the note below).

A side-change here is that we enable ZIL writes to normal class vdevs
in hybrid pools.

Another side change here is the introduction of
`zpool wait -i bb_scan`, which waits for the pool-wide scan that
precedes removal. This was implemented for testing the feature in the
ZTS. Running `zpool wait -i removal` waits for both the scan and the
actual removal.

= Testing
* New tests covering this feature have been added to the test suite
* We now have green zoa_kill stress tests from QA

= Misc Details About Code & Future Work

zero-length writes: To maintain the offset-to-blockid translations
in our indirect mapping for all allocated blocks, we submit a write
to the object store with the contents of the block that we are
copying, specifying the same size as the block, and then submit
zero-length writes for every other block ID covered by that segment.
For example, when we copy a 2KB block to the object store that
translates to BlockID X, we submit the 2KB write with the contents
to BlockID X and then submit 3 zero-length writes to X+1, X+2,
and X+3. These zero-length writes are something that we had to
explicitly add support for in the object agent - specifically,
allowing DataObjects with no blocks to be flushed to the object
store.

These zero-length writes can still induce CPU and bandwidth
overhead in the kernel-to-agent communication, hurting our removal
performance. We could optimize them further in future releases.
Bug: https://delphix.atlassian.net/browse/DLPX-85983
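The block-ID fan-out described above can be sketched like this. This is a hypothetical helper, not the agent's actual API, and it assumes the 512-byte object-store block ID granularity described in the problem statement:

```rust
// Assumed granularity: one object-store BlockID per 512 bytes.
const BLOCK_SHIFT: u64 = 9;

/// Illustrative sketch: given the BlockID that a copied block translated
/// to and the block's size in bytes, return the trailing block IDs that
/// must receive zero-length writes so every covered BlockID stays mapped.
fn zero_length_write_ids(block_id: u64, size_bytes: u64) -> Vec<u64> {
    // Number of 512-byte block IDs the block's data spans.
    let covered = size_bytes >> BLOCK_SHIFT;
    // The first ID gets the real data write; the rest get zero-length writes.
    (block_id + 1..block_id + covered).collect()
}
```

For the 2KB example above, `zero_length_write_ids(x, 2048)` yields the three IDs X+1, X+2, X+3; a 512-byte block needs no zero-length writes at all.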

memory limit for removal: The B-tree used for the pool-wide scan,
which exists until the end of the removal, can be quite expensive in
terms of RAM. It could lead to a very bad scenario if we were to
start a migration/removal that ends up running the system out of
memory. For this reason we perform a memory-limit check before
starting such an operation. Unfortunately, we can't tell exactly
how much RAM this tree will consume, because its size depends on the
sizes of the blocks on the removing device, and we don't currently
have any useful block-size statistics in ZFS. Thus we make some
assumptions, which are implemented in the code as tunables.
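The kind of heuristic described above can be sketched as follows. The tunable values here are made-up placeholders, not the shipped defaults, and the function is illustrative rather than the actual kernel check:

```rust
// Placeholder tunables (assumptions, not the real defaults): average
// block size on the removing device and per-entry B-tree overhead.
const ASSUMED_AVG_BLOCK_SIZE: u64 = 8 * 1024;
const BYTES_PER_BTREE_ENTRY: u64 = 32;
// Refuse removals whose tree estimate exceeds this fraction of RAM.
const MAX_MEM_FRACTION: f64 = 0.25;

/// Illustrative memory-limit check: estimate the block-boundary B-tree's
/// RAM footprint from the device's allocated bytes, and fail early if it
/// would exceed the allowed fraction of system memory.
fn removal_mem_check(allocated_bytes: u64, system_ram: u64) -> Result<u64, String> {
    let entries = allocated_bytes / ASSUMED_AVG_BLOCK_SIZE;
    let estimate = entries * BYTES_PER_BTREE_ENTRY;
    if (estimate as f64) > (system_ram as f64) * MAX_MEM_FRACTION {
        Err(format!("removal would need ~{estimate} bytes of RAM"))
    } else {
        Ok(estimate)
    }
}
```

Under these assumed tunables, a 1TiB device needs roughly 4GiB of tree, which a 64GiB system can absorb but a small VM cannot.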

@sumedhbala-delphix helped me with this by creating an Excel
spreadsheet listing all the devices that we have from phonehome
data and the memory of the systems that they belong to (reference:
https://docs.google.com/spreadsheets/d/
1VMWRtNdQZ2EWoxLdI5Gpyrpoz_NZiYDzgciLIebrHSw/edit?usp=sharing).
Almost 1% of our customers do not have enough memory for such
an operation, and the majority of that 1% is below the recommended
memory size that we have for customers (64GB). With that in mind,
our goal is to ship the heuristic memory-limit check that we have
now in 14.0 and improve our memory consumption and memory checks
in future releases.
(see https://delphix.atlassian.net/browse/DLPX-87127).

zettacache ingestion: Currently all data sent to the object store
by the removal operation is ingested into the zettacache as normal
writes. This could disrupt the cached read performance of some of
our customers, depending on their workload. Even though there isn't
a silver bullet for this problem, it could be helpful in the future
to introduce some kind of tunable or filtering for those writes:
https://delphix.atlassian.net/browse/DLPX-85502

block boundary persistence: We currently use a spacemap, which has
space overheads (i.e., fields that we don't use, plus debug
entries). It would be nice to have an on-disk structure that's
basically a Vec: https://delphix.atlassian.net/browse/DLPX-87128
sdimitro authored Aug 18, 2023
1 parent a27975b commit c344496
Showing 48 changed files with 2,297 additions and 490 deletions.
7 changes: 5 additions & 2 deletions cmd/zdb/zdb.c
@@ -5962,8 +5962,11 @@ claim_segment_impl_cb(uint64_t inner_offset, vdev_t *vd, uint64_t offset,
*/
ASSERT(vdev_is_concrete(vd));

VERIFY0(metaslab_claim_impl(vd, offset, size,
spa_min_claim_txg(vd->vdev_spa)));
metaslab_visit_op_t mvo = {
.mvo_func = metaslab_claim_concrete,
.mvo_txg = spa_min_claim_txg(vd->vdev_spa),
};
VERIFY0(metaslab_visit(vd, offset, size, &mvo));
}

static void
17 changes: 16 additions & 1 deletion cmd/zfs_object_agent/util/src/vec_ext.rs
@@ -92,6 +92,12 @@ impl From<AlignedBytes> for Bytes {

impl From<AlignedVec> for AlignedBytes {
fn from(mut aligned_vec: AlignedVec) -> Self {
if aligned_vec.is_empty() {
return Self {
alignment: aligned_vec.alignment,
bytes: Bytes::new(),
};
}
aligned_vec.verify();
let ptr = aligned_vec.vec.as_ptr();
// resize so that there is no spare capacity, so that converting to Bytes will not
@@ -127,6 +133,13 @@ pub struct AlignedVec {
impl AlignedVec {
pub fn with_capacity(capacity: usize, alignment: usize) -> Self {
assert_ne!(alignment, 0);
if capacity == 0 {
return Self {
alignment,
vec: Vec::new(),
pad: 0,
};
}
let mut vec: Vec<u8> = Vec::with_capacity(capacity + alignment);
let pad = vec.as_ptr().align_offset(alignment);
assert_lt!(pad, alignment);
@@ -141,7 +154,9 @@
}

fn verify(&self) {
assert_eq!(self.as_ptr().align_offset(self.alignment), 0);
if self.vec.capacity() != 0 {
assert_eq!(self.as_ptr().align_offset(self.alignment), 0);
}
}

pub fn extend_from_value(&mut self, len: usize, value: u8) {
12 changes: 10 additions & 2 deletions cmd/zfs_object_agent/zettaobject/src/data_object.rs
@@ -565,8 +565,16 @@ impl DataObject {
u32::try_from(self.blocks.len()).unwrap()
}

pub fn is_empty(&self) -> bool {
self.blocks.is_empty()
/// A DataObject is considered empty if the range of block IDs that it contains is empty.
/// Note that a non-empty object may contain zero blocks, e.g. if the blocks have been freed
/// and reclaimed, or if the blocks were created by zero-length writes.
pub fn covers_any_blocks(&self) -> bool {
if self.header.object.as_min_block() == self.header.next_block {
assert!(self.blocks.is_empty());
true
} else {
false
}
}

/// This assumes that the objects all have ObjectVersion=None
11 changes: 6 additions & 5 deletions cmd/zfs_object_agent/zettaobject/src/pool.rs
@@ -671,7 +671,7 @@ impl PoolSyncingState {
assert!(self.pending_unordered_writes.is_empty());
{
let (phys, senders) = self.pending_object.as_mut_pending();
assert!(phys.is_empty());
assert!(phys.covers_any_blocks());
assert!(senders.is_empty());

self.pending_object = PendingObjectState::NotPending(phys.header.next_block);
@@ -1339,11 +1339,10 @@ impl Pool {

let (object, next_block) = {
let (phys, _) = syncing_state.pending_object.as_mut_pending();
if phys.is_empty() {
if phys.covers_any_blocks() {
return;
} else {
(phys.header.object, phys.header.next_block)
}
(phys.header.object, phys.header.next_block)
};

let (phys, callbacks) = mem::replace(
@@ -1407,7 +1406,9 @@ impl Pool {
assert_eq!(block, next_block);
let (phys, callbacks) = syncing_state.pending_object.as_mut_pending();
phys.header.blocks_size += u32::try_from(buf.len()).unwrap();
phys.blocks.insert(phys.header.next_block, buf);
if !buf.is_empty() {
phys.blocks.insert(phys.header.next_block, buf);
}
next_block = next_block.next();
phys.header.next_block = next_block;
callbacks.push(callback);
7 changes: 5 additions & 2 deletions cmd/zpool/zpool_main.c
@@ -11248,7 +11248,7 @@ print_wait_status_row(wait_data_t *wd, zpool_handle_t *zhp, int row)
pool_scan_stat_t *pss = NULL;
pool_removal_stat_t *prs = NULL;
const char *const headers[] = {"DISCARD", "FREE", "INITIALIZE",
"REPLACE", "REMOVE", "RESILVER", "SCRUB", "TRIM"};
"REPLACE", "REMOVE", "RESILVER", "SCRUB", "TRIM", "BB_SCAN"};
int col_widths[ZPOOL_WAIT_NUM_ACTIVITIES];

/* Calculate the width of each column */
@@ -11300,6 +11300,8 @@ print_wait_status_row(wait_data_t *wd, zpool_handle_t *zhp, int row)
int64_t rem = pss->pss_to_examine - pss->pss_issued;
if (pss->pss_func == POOL_SCAN_SCRUB)
bytes_rem[ZPOOL_WAIT_SCRUB] = rem;
else if (pss->pss_func == POOL_SCAN_BLOCK_BOUNDARIES)
bytes_rem[ZPOOL_WAIT_BLOCK_BOUNDARY_SCAN] = rem;
else
bytes_rem[ZPOOL_WAIT_RESILVER] = rem;
} else if (check_rebuilding(nvroot, NULL)) {
@@ -11446,7 +11448,8 @@ zpool_do_wait(int argc, char **argv)
for (char *tok; (tok = strsep(&optarg, ",")); ) {
static const char *const col_opts[] = {
"discard", "free", "initialize", "replace",
"remove", "resilver", "scrub", "trim" };
"remove", "resilver", "scrub", "trim",
"bb_scan" };

for (i = 0; i < ARRAY_SIZE(col_opts); ++i)
if (strcmp(tok, col_opts[i]) == 0) {
1 change: 1 addition & 0 deletions include/libzfs.h
@@ -158,6 +158,7 @@ typedef enum zfs_error {
EZFS_RESUME_EXISTS, /* Resume on existing dataset without force */
EZFS_OBJSTORE_EXISTS, /* pool is already backed by an object store */
EZFS_OBJSTORE_ADD_ALONE, /* can't add extra vdevs with objstore */
EZFS_BB_SCAN_MEMLIMIT, /* not enough memory for removal to objstore */
EZFS_UNKNOWN
} zfs_error_t;

1 change: 1 addition & 0 deletions include/sys/dmu.h
@@ -392,6 +392,7 @@ typedef struct dmu_buf {
#define DMU_POOL_DELETED_CLONES "com.delphix:deleted_clones"
#define DMU_POOL_BOOKMARK_V2_RECALCULATED \
"com.delphix:bookmark_v2_recalculated"
#define DMU_POOL_BLOCK_BOUNDARY_SM "com.delphix:block_boundary_spacemap"

/*
* Allocate an object from this objset. The range of object numbers
2 changes: 2 additions & 0 deletions include/sys/dsl_scan.h
@@ -188,6 +188,7 @@ void dsl_scan_setup_sync(void *, dmu_tx_t *);
void dsl_scan_fini(struct dsl_pool *dp);
void dsl_scan_sync(struct dsl_pool *, dmu_tx_t *);
int dsl_scan_cancel(struct dsl_pool *);
void dsl_scan_cancel_sync(void *, dmu_tx_t *);
int dsl_scan(struct dsl_pool *, pool_scan_func_t);
void dsl_scan_assess_vdev(struct dsl_pool *dp, vdev_t *vd);
boolean_t dsl_scan_scrubbing(const struct dsl_pool *dp);
@@ -198,6 +199,7 @@ int dsl_scrub_set_pause_resume(const struct dsl_pool *dp,
pool_scrub_cmd_t cmd);
void dsl_errorscrub_sync(struct dsl_pool *, dmu_tx_t *);
boolean_t dsl_scan_resilvering(struct dsl_pool *dp);
boolean_t dsl_scan_recording_block_boundaries(struct dsl_pool *dp);
boolean_t dsl_scan_resilver_scheduled(struct dsl_pool *dp);
boolean_t dsl_dataset_unstable(struct dsl_dataset *ds);
void dsl_scan_ddt_entry(dsl_scan_t *scn, enum zio_checksum checksum,
4 changes: 4 additions & 0 deletions include/sys/fs/zfs.h
@@ -1068,6 +1068,7 @@ typedef enum pool_scan_func {
POOL_SCAN_SCRUB,
POOL_SCAN_RESILVER,
POOL_SCAN_ERRORSCRUB,
POOL_SCAN_BLOCK_BOUNDARIES,
POOL_SCAN_FUNCS
} pool_scan_func_t;

@@ -1104,6 +1105,7 @@ typedef enum zio_type {
ZIO_TYPE_CLAIM,
ZIO_TYPE_IOCTL,
ZIO_TYPE_TRIM,
ZIO_TYPE_RECORD_BLOCK,
ZIO_TYPES
} zio_type_t;

@@ -1603,6 +1605,7 @@ typedef enum {
ZFS_ERR_CRYPTO_NOTSUP,
ZFS_ERR_OBJSTORE_EXISTS,
ZFS_ERR_OBJSTORE_ADD_ALONE,
ZFS_ERR_BB_SCAN_MEMLIMIT,
} zfs_errno_t;

/*
@@ -1644,6 +1647,7 @@ typedef enum {
ZPOOL_WAIT_RESILVER,
ZPOOL_WAIT_SCRUB,
ZPOOL_WAIT_TRIM,
ZPOOL_WAIT_BLOCK_BOUNDARY_SCAN,
ZPOOL_WAIT_NUM_ACTIVITIES
} zpool_wait_activity_t;

12 changes: 11 additions & 1 deletion include/sys/metaslab.h
@@ -43,6 +43,14 @@ typedef struct metaslab_ops {
} metaslab_ops_t;


typedef int mvo_func_t(vdev_t *vd,
uint64_t offset, uint64_t size, uint64_t txg);

typedef struct metaslab_visit_op {
mvo_func_t *mvo_func;
uint64_t mvo_txg;
} metaslab_visit_op_t;

extern const metaslab_ops_t zfs_metaslab_ops;
extern const metaslab_ops_t zfs_objectstore_ops;

@@ -95,8 +103,10 @@ void metaslab_free_concrete(vdev_t *, uint64_t, uint64_t, boolean_t);
void metaslab_free_dva(spa_t *, const dva_t *, boolean_t);
void metaslab_free_impl_cb(uint64_t, vdev_t *, uint64_t, uint64_t, void *);
void metaslab_unalloc_dva(spa_t *, const dva_t *, uint64_t);
int metaslab_record(spa_t *, const blkptr_t *, uint64_t);
int metaslab_claim(spa_t *, const blkptr_t *, uint64_t);
int metaslab_claim_impl(vdev_t *, uint64_t, uint64_t, uint64_t);
int metaslab_visit(vdev_t *, uint64_t, uint64_t, metaslab_visit_op_t *);
int metaslab_claim_concrete(vdev_t *, uint64_t, uint64_t, uint64_t);
void metaslab_check_free(spa_t *, const blkptr_t *);

void metaslab_stat_init(void);
1 change: 1 addition & 0 deletions include/sys/spa.h
@@ -1089,6 +1089,7 @@ extern uint64_t dva_get_dsize_sync(spa_t *spa, const dva_t *dva);
extern uint64_t bp_get_dsize_sync(spa_t *spa, const blkptr_t *bp);
extern uint64_t bp_get_dsize(spa_t *spa, const blkptr_t *bp);
extern boolean_t spa_has_slogs(spa_t *spa);
extern boolean_t spa_has_normal_vdevs(spa_t *spa);
extern boolean_t spa_is_root(spa_t *spa);
extern boolean_t spa_writeable(spa_t *spa);
extern boolean_t spa_has_pending_synctask(spa_t *spa);
2 changes: 2 additions & 0 deletions include/sys/space_map.h
@@ -228,6 +228,8 @@ uint64_t space_map_nblocks(space_map_t *sm);

void space_map_write(space_map_t *sm, range_tree_t *rt, maptype_t maptype,
uint64_t vdev_id, dmu_tx_t *tx);
void space_map_write_btree(space_map_t *sm,
zfs_btree_t *t, dmu_tx_t *tx);
uint64_t space_map_estimate_optimal_size(space_map_t *sm, range_tree_t *rt,
uint64_t vdev_id);
void space_map_truncate(space_map_t *sm, int blocksize, dmu_tx_t *tx);
32 changes: 32 additions & 0 deletions include/sys/vdev_removal.h
@@ -29,6 +29,28 @@
extern "C" {
#endif

typedef struct block_boundaries {
/*
* Synchronization primitives used for coordinating the
* open-context removal thread with the scanning logic.
*
* The mutex also protects concurrent accesses to the
* bb_scanning_tree between ZIOs.
*/
kcondvar_t bb_cv;
kmutex_t bb_lock;

/* Contains all the block boundaries scanned this TXG */
zfs_btree_t bb_scanning;

/* Contains all the block boundaries scanned so far */
zfs_btree_t bb_tree;

/* Persistence of bb_tree between imports/exports */
space_map_t *bb_sm;

} block_boundaries_t;

typedef struct vdev_copy_arg {
/* Current metaslab that removal is evacuating data from. */
metaslab_t *vca_msp;
@@ -83,6 +105,13 @@ typedef struct spa_vdev_removal {

/* Data tracking on-going removal. */
vdev_copy_arg_t svr_vca;

/*
* Data used for coordinating block boundary scans with removal.
* This field is only used by removals evacuating data to object
* store vdevs.
*/
block_boundaries_t svr_bb;
} spa_vdev_removal_t;

typedef struct spa_condensing_indirect {
@@ -108,6 +137,9 @@ extern void spa_vdev_remove_suspend(spa_t *);
extern int spa_vdev_remove_cancel(spa_t *);
extern void spa_vdev_removal_destroy(spa_vdev_removal_t *);
extern uint64_t spa_remove_max_segment(spa_t *);
extern boolean_t spa_vdev_removal_active(spa_t *);
extern boolean_t spa_vdev_is_evacuating_to_object_store(spa_t *);
extern boolean_t spa_vdev_removal_block_boundaries_scan_is_done(spa_t *);

extern uint_t vdev_removal_max_span;

3 changes: 3 additions & 0 deletions include/sys/zio.h
@@ -586,6 +586,9 @@ extern zio_t *zio_claim(zio_t *pio, spa_t *spa, uint64_t txg,
const blkptr_t *bp,
zio_done_func_t *done, void *priv, zio_flag_t flags);

extern zio_t *zio_record_boundary(zio_t *pio, spa_t *spa, uint64_t txg,
const blkptr_t *bp, zio_flag_t flags);

extern zio_t *zio_ioctl(zio_t *pio, spa_t *spa, vdev_t *vd, int cmd,
zio_done_func_t *done, void *priv, zio_flag_t flags);

31 changes: 20 additions & 11 deletions include/sys/zio_impl.h
@@ -39,14 +39,23 @@ extern "C" {
*
* The ZFS I/O pipeline is comprised of various stages which are defined
* in the zio_stage enum below. The individual stages are used to construct
* these basic I/O operations: Read, Write, Free, Claim, and Ioctl.
* these basic I/O operations: Read, Write, Free, Visit, and Ioctl.
*
* I/O operations: (XXX - provide detail for each of the operations)
*
* Read:
* Write:
* Free:
* Claim:
* Visit:
This operation visits the DVAs of a particular block pointer,
resolving any layers of indirection on the way - specifically
gang blocks, or blocks that have been evacuated during device
removal and are part of its indirect mapping. Depending on the
ZIO type, the operation applies different functions to the
visited block. zio_claim(), for example, visits all the
uncommitted blocks from the ZIL during pool import to notify
the SPA that they are in use. Similarly, zio_record_boundary()
visits a block to record its boundaries.
* Ioctl:
*
* Although the most common pipeline are used by the basic I/O operations
@@ -120,7 +129,7 @@
* zio pipeline stage definitions
*/
enum zio_stage {
ZIO_STAGE_OPEN = 1 << 0, /* RWFCI */
ZIO_STAGE_OPEN = 1 << 0, /* RWFVI */

ZIO_STAGE_READ_BP_INIT = 1 << 1, /* R---- */
ZIO_STAGE_WRITE_BP_INIT = 1 << 2, /* -W--- */
@@ -140,23 +149,23 @@
ZIO_STAGE_DDT_WRITE = 1 << 12, /* -W--- */
ZIO_STAGE_DDT_FREE = 1 << 13, /* --F-- */

ZIO_STAGE_GANG_ASSEMBLE = 1 << 14, /* RWFC- */
ZIO_STAGE_GANG_ISSUE = 1 << 15, /* RWFC- */
ZIO_STAGE_GANG_ASSEMBLE = 1 << 14, /* RWFV- */
ZIO_STAGE_GANG_ISSUE = 1 << 15, /* RWFV- */

ZIO_STAGE_DVA_THROTTLE = 1 << 16, /* -W--- */
ZIO_STAGE_DVA_ALLOCATE = 1 << 17, /* -W--- */
ZIO_STAGE_DVA_FREE = 1 << 18, /* --F-- */
ZIO_STAGE_DVA_CLAIM = 1 << 19, /* ---C- */
ZIO_STAGE_DVA_VISIT = 1 << 19, /* ---V- */

ZIO_STAGE_READY = 1 << 20, /* RWFCI */
ZIO_STAGE_READY = 1 << 20, /* RWFVI */

ZIO_STAGE_VDEV_IO_START = 1 << 21, /* RW--I */
ZIO_STAGE_VDEV_IO_DONE = 1 << 22, /* RW--I */
ZIO_STAGE_VDEV_IO_ASSESS = 1 << 23, /* RW--I */

ZIO_STAGE_CHECKSUM_VERIFY = 1 << 24, /* R---- */

ZIO_STAGE_DONE = 1 << 25 /* RWFCI */
ZIO_STAGE_DONE = 1 << 25 /* RWFVI */
};

#define ZIO_INTERLOCK_STAGES \
@@ -250,9 +259,9 @@ enum zio_stage {
ZIO_STAGE_ISSUE_ASYNC | \
ZIO_STAGE_DDT_FREE)

#define ZIO_CLAIM_PIPELINE \
#define ZIO_VISIT_PIPELINE \
(ZIO_INTERLOCK_STAGES | \
ZIO_STAGE_DVA_CLAIM)
ZIO_STAGE_DVA_VISIT)

#define ZIO_IOCTL_PIPELINE \
(ZIO_INTERLOCK_STAGES | \
@@ -266,7 +275,7 @@

#define ZIO_BLOCKING_STAGES \
(ZIO_STAGE_DVA_ALLOCATE | \
ZIO_STAGE_DVA_CLAIM | \
ZIO_STAGE_DVA_VISIT | \
ZIO_STAGE_VDEV_IO_START)

extern void zio_inject_init(void);
1 change: 1 addition & 0 deletions include/zfeature_common.h
@@ -81,6 +81,7 @@ typedef enum spa_feature {
SPA_FEATURE_VERSIONED_OBJECTS,
SPA_FEATURE_BLOCK_CLONING,
SPA_FEATURE_AVZ_V2,
SPA_FEATURE_HYBRID_POOLS,
SPA_FEATURES
} spa_feature_t;
