LIP: 0063
Title: Define mainnet configuration and migration for Lisk Core v4
Author: Andreas Kendziorra <[email protected]>
Sergey Shemyakov <[email protected]>
Discussions-To: https://research.lisk.com/t/define-mainnet-configuration-and-migration-for-lisk-core-4/340
Status: Active
Type: Standards Track
Created: 2022-04-06
Updated: 2024-01-05
Requires: 0060
This proposal defines the configuration of Lisk Core v4, including the selection of modules and the choice of some configurable constants. Moreover, it defines the migration process from Lisk Core v3 to Lisk Core v4. As for the previous hard fork, a snapshot block is used that contains a snapshot of the state. This snapshot block is then treated like a genesis block by Lisk Core v4 nodes. In contrast to the previous hard fork, the existing block and transaction history is not discarded.
This LIP is licensed under the Creative Commons Zero 1.0 Universal.
This LIP is motivated by two points. The first is that Lisk SDK v6 introduces several new configurable settings, including the set of existing modules that can or even must be registered, as well as some new constants that must be specified for each chain. These configurations must therefore also be specified for Lisk Core v4, which shall be done within this LIP.
The second point is the migration from Lisk Core v3 to Lisk Core v4, for which a process must be defined. This shall also be done within this LIP.
The following table defines some constants that will be used in the remainder of this document.
| Name | Type | Value | Description |
|------|------|-------|-------------|
| DUMMY_PROOF_OF_POSSESSION | bytes | byte string of length 96, each byte set to zero | A dummy value for a proof of possession. The PoS module requires that each validator account contained in a snapshot/genesis block has such an entry; however, the proofs of possession are not verified for this particular snapshot block. This value is used for every validator account in the snapshot block. |
| EMPTY_BYTES | bytes | empty byte string | Empty array of bytes. |
| CHAIN_ID_MAINCHAIN | bytes | `0x 00 00 00 00` (in Lisk Mainnet) | The chain ID of the Lisk mainchain, see LIP 0037. |
| TOKEN_ID_LSK | bytes | `0x 00 00 00 00 00 00 00 00` | Token ID of the LSK token for Mainnet, see LIP 0051. |
| MODULE_NAME_POS | string | "pos" | Module name of the PoS module. |
| MODULE_NAME_AUTH | string | "auth" | Module name of the Auth module. |
| MODULE_NAME_TOKEN | string | "token" | Module name of the Token module. |
| MODULE_NAME_LEGACY | string | "legacy" | Module name of the Legacy module. |
| MODULE_NAME_INTEROPERABILITY | string | "interoperability" | Module name of the Interoperability module. |
| CHAIN_NAME_MAINCHAIN | string | "lisk_mainchain" | Name of the Lisk mainchain, as defined in the Interoperability module. |
| INVALID_ED25519_KEY | bytes | `0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff` (32 bytes, each set to 0xff; see the rationale below) | An Ed25519 public key for which signature validation always fails. This value is used for the generatorKey property of validators in the snapshot block for which no public key is present within the history since the last snapshot block. |
| INVALID_BLS_KEY | bytes | 48 bytes, all set to zero | A BLS key used as a placeholder before a valid BLS key is registered. It is invalid since the most significant bit of the first byte is zero while the total length is 48 bytes, and hence deserialization fails (see the second point here). |
| POS_INIT_ROUNDS | uint32 | 587 = `7*24*3600 // (BLOCK_TIME * (NUMBER_ACTIVE_VALIDATORS + NUMBER_STANDBY_VALIDATORS))` | The number of rounds for the bootstrap period following the snapshot block. This number corresponds to one week assuming no missed blocks. |
| HEIGHT_SNAPSHOT | uint32 | 23,390,991 | The height of the block from which a state snapshot is taken. This block must be the last block of a round. The snapshot block then has the height HEIGHT_SNAPSHOT + 1. |
| HEIGHT_PREVIOUS_SNAPSHOT_BLOCK | uint32 | 16,270,293 | The height of the snapshot block used for the migration from Lisk Core v2 to Lisk Core v3. |
| SNAPSHOT_TIME_GAP | uint32 | 3,600 | The number of seconds elapsed between the block at height HEIGHT_SNAPSHOT and the snapshot block. |
| SNAPSHOT_BLOCK_VERSION | uint32 | 0 | The block version of the snapshot block. |
| ADDRESS_LEGACY_RESERVE | bytes | SHA-256(b'legacyReserve')[:20] | The address used to store all tokens of legacy accounts. |
| Q96_ZERO | bytes | empty byte string | The empty byte string, which represents zero in Q96 representation. |
Note that if the migration is tested for a different network, e.g., Betanet or Testnet, the first byte of the constants CHAIN_ID_MAINCHAIN and TOKEN_ID_LSK needs to be changed to the appropriate chain identifier prefix of that network; see LIP 0037 for details.
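For concreteness, the derived constants above can be reproduced with the following sketch. The Python names simply mirror the table and are not taken from any Lisk implementation:

```python
import hashlib

BLOCK_TIME = 10                     # seconds, see the configuration table below
NUMBER_ACTIVE_VALIDATORS = 101
NUMBER_STANDBY_VALIDATORS = 2
ROUND_LENGTH = NUMBER_ACTIVE_VALIDATORS + NUMBER_STANDBY_VALIDATORS  # 103

# Bootstrap period: one week of block slots, rounded down to full rounds.
POS_INIT_ROUNDS = 7 * 24 * 3600 // (BLOCK_TIME * ROUND_LENGTH)
assert POS_INIT_ROUNDS == 587

# Address holding the balances of all legacy accounts.
ADDRESS_LEGACY_RESERVE = hashlib.sha256(b"legacyReserve").digest()[:20]

# Placeholder keys used in the snapshot block.
INVALID_ED25519_KEY = bytes([0xFF] * 32)   # 32 bytes, each set to 0xff
INVALID_BLS_KEY = bytes(48)                # 48 zero bytes
DUMMY_PROOF_OF_POSSESSION = bytes(96)      # 96 zero bytes
```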
Lisk Core v4 will use the following modules, where the given order defines the module registration order:

The Interoperability module requires defining the registration order of all interoperable modules. This order determines the order in which the interoperability hooks of the modules are executed in the apply and forward functions of the Interoperability module. The order is as follows:
Some new LIPs introduced configurable constants. The values for these constants are specified in the following table:
| Module/LIP | Constant Name | Value |
|---|---|---|
| LIP 0058 | LSK_BFT_BATCH_SIZE | 103 |
| LIP 0055 | MAX_TRANSACTIONS_SIZE_BYTES | 15,360 |
| LIP 0055 | MAX_ASSET_DATA_SIZE_BYTES | 18 (the Random module is the only module adding an entry to the assets property, and the size of its data property does not exceed 18 bytes) |
| LIP 0037, LIP 0055 | OWN_CHAIN_ID | CHAIN_ID_MAINCHAIN |
| LIP 0061 | BLOCK_TIME | 10 seconds |
| LIP 0061 | MIN_CERTIFICATE_HEIGHT | HEIGHT_SNAPSHOT + 1 + (POS_INIT_ROUNDS + NUMBER_ACTIVE_VALIDATORS - 1) × (NUMBER_ACTIVE_VALIDATORS + NUMBER_STANDBY_VALIDATORS) = HEIGHT_SNAPSHOT + 1 + 70761 |
| LIP 0068 | MAX_PARAMS_SIZE | 14 KiB (14 × 1024 bytes) |
| LIP 0070 | COMMISSION_INCREASE_PERIOD | 241,920 |
| LIP 0070 | MAX_COMMISSION_INCREASE | 500 |
| PoS Module | FACTOR_SELF_STAKING | 10 |
| PoS Module | BASE_STAKING_AMOUNT | 10 × 10^8 |
| PoS Module | MAX_LENGTH_NAME | 20 |
| PoS Module | MAX_NUMBER_STAKING_SLOTS | 10 |
| PoS Module | MAX_NUMBER_PENDING_UNLOCKS | 20 |
| PoS Module | FAIL_SAFE_MISSED_BLOCKS | 50 |
| PoS Module | FAIL_SAFE_INACTIVE_WINDOW | 120,960 |
| PoS Module | LOCKING_PERIOD_STAKING | 25,920 |
| PoS Module | LOCKING_PERIOD_SELF_STAKING | 241,920 |
| PoS Module | PUNISHMENT_WINDOW_STAKING | 241,920 |
| PoS Module | PUNISHMENT_WINDOW_SELF_STAKING | 725,760 |
| PoS Module | REPORT_MISBEHAVIOR_REWARD | 10^8 |
| PoS Module | REPORT_MISBEHAVIOR_LIMIT_BANNED | 5 |
| PoS Module | MIN_WEIGHT | 1,000 × 10^8 |
| PoS Module | NUMBER_ACTIVE_VALIDATORS | 101 |
| PoS Module | NUMBER_STANDBY_VALIDATORS | 2 |
| PoS Module | TOKEN_ID_POS | TOKEN_ID_LSK |
| PoS Module | VALIDATOR_REGISTRATION_FEE | 10 × 10^8 |
| PoS Module | WEIGHT_SCALE_FACTOR | 1,000 × 10^8 |
| PoS Module | MAX_BFT_WEIGHT_CAP | 1,000 |
| PoS Module | INVALID_BLS_KEYS_IN_GENESIS_BLOCK | True |
| Random Module | MAX_LENGTH_VALIDATOR_REVEALS | 206 |
| Fee Module | MIN_FEE_PER_BYTE | 1,000 |
| Fee Module | MAX_BLOCK_HEIGHT_ZERO_FEE_PER_BYTE | 0 |
| Fee Module | TOKEN_ID_FEE | TOKEN_ID_LSK |
| Fee Module | ADDRESS_FEE_POOL | None |
| Token Module | USER_ACCOUNT_INITIALIZATION_FEE | 5,000,000 |
| Token Module | ESCROW_ACCOUNT_INITIALIZATION_FEE | 5,000,000 |
| Dynamic Reward Module | FACTOR_MINIMUM_REWARD_ACTIVE_VALIDATORS | 1,000 |
| Dynamic Reward Module | TOKEN_ID_DYNAMIC_BLOCK_REWARD | TOKEN_ID_LSK |
| Dynamic Reward Module | REWARD_REDUCTION_FACTOR_BFT | 4 |
Figure 1: Overview of the migration process. Elements in blue are created by Lisk Core v3, including the state snapshot. The element in yellow - the snapshot block - is created by the migrator tool. Elements in green are created by Lisk Core v4.
The migration from Lisk Core v3 to Lisk Core v4, also depicted in Figure 1, is performed as follows:
- Nodes run Lisk Core v3, where the following steps are done:
  - Once a block at height `HEIGHT_SNAPSHOT` is processed, a snapshot of the state is derived, which we denote by `STATE_SNAPSHOT`. If this block is reverted and a new block for this height is processed, then `STATE_SNAPSHOT` needs to be computed again.
  - Nodes continue processing blocks until the block at height `HEIGHT_SNAPSHOT` is final.
  - Once the block at height `HEIGHT_SNAPSHOT` is final, nodes can stop forging and processing new blocks. All blocks with a height larger than or equal to `HEIGHT_SNAPSHOT + 1` are discarded, even if they are finalized.
- Nodes compute a snapshot block as defined below using a migrator tool and store it locally.
- Nodes start to run Lisk Core v4, where the steps described in the section Starting Lisk Core v4 are executed.
- Once the timeslot of the snapshot block at height `HEIGHT_SNAPSHOT + 1` has passed, the first round following the new protocol starts.
When Lisk Core v4 is started for the first time, the following steps are performed:
1. Get the snapshot block (the one for height `HEIGHT_SNAPSHOT + 1`):
    1. Check if the snapshot block for height `HEIGHT_SNAPSHOT + 1` exists locally. If yes, fetch this block. If not, stop the initialization here.
2. Process the snapshot block as described in LIP 0060.
3. Check if all blocks between heights `HEIGHT_PREVIOUS_SNAPSHOT_BLOCK` and `HEIGHT_SNAPSHOT` (inclusive) from Lisk Core v3 can be found locally. If yes:
    1. Fetch these blocks from highest to lowest height. Each block is validated using minimal validation steps as defined below. If this validation step passes, the block and its transactions are persisted in the database.
    2. Skip steps 4 and 5.
4. Fetch all blocks between heights `HEIGHT_PREVIOUS_SNAPSHOT_BLOCK + 1` and `HEIGHT_SNAPSHOT` (inclusive) via the peer-to-peer network from highest to lowest height. Each block is validated using minimal validation steps as defined below. If this validation passes, the block along with its transactions is persisted in the database.
5. The snapshot block for height `HEIGHT_PREVIOUS_SNAPSHOT_BLOCK` is downloaded from a server. The URL for the source can be configured. Once downloaded, it is validated using minimal validation steps as defined below. If this validation step passes, the block is persisted in the database.

The steps 3 to 5 from above could run in the background with low priority.
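This startup logic can be summarized by the following sketch in the style of the pseudo code used later in this document. The helper functions and the `config.snapshotBlockURL` field are hypothetical placeholders, not the actual Lisk Core API:

```python
def start_lisk_core_v4():
    # Step 1: the snapshot block must already exist locally (created by the migrator tool).
    if not snapshot_block_exists_locally(HEIGHT_SNAPSHOT + 1):
        return  # stop the initialization here
    snapshot_block = fetch_local_block(HEIGHT_SNAPSHOT + 1)

    # Step 2: process the snapshot block as defined in LIP 0060.
    process_genesis_block(snapshot_block)

    # Steps 3-5: restore the Lisk Core v3 block history (may run in the background).
    if v3_history_available_locally(HEIGHT_PREVIOUS_SNAPSHOT_BLOCK, HEIGHT_SNAPSHOT):
        # Step 3: validate and persist the local history, from highest to lowest height.
        for block in fetch_local_blocks(HEIGHT_SNAPSHOT, HEIGHT_PREVIOUS_SNAPSHOT_BLOCK):
            validate_v3_block(block)  # minimal validation steps, see below
            persist(block)
        return  # step 3.2: skip steps 4 and 5
    # Step 4: fetch the history via the peer-to-peer network, from highest to lowest height.
    for block in fetch_blocks_from_peers(HEIGHT_SNAPSHOT, HEIGHT_PREVIOUS_SNAPSHOT_BLOCK + 1):
        validate_v3_block(block)
        persist(block)
    # Step 5: download the previous snapshot block from a configurable URL and validate it.
    previous_snapshot_block = download_block(config.snapshotBlockURL, HEIGHT_PREVIOUS_SNAPSHOT_BLOCK)
    validate_v3_block(previous_snapshot_block)
    persist(previous_snapshot_block)
```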
Due to step 1.1, it is a requirement to run Lisk Core v3 and the migrator tool before running Lisk Core v4. However, nodes starting some time after the migration may fetch the snapshot block and its preceding blocks without running Lisk Core v3 and the migrator tool, as described in the following subsection.
Once the snapshot block at height `HEIGHT_SNAPSHOT + 1` is final, a new version of Lisk Core v4 that has the block ID of this snapshot block hard-coded can be released. In the following, we denote this version by Lisk Core v4+. When Lisk Core v4+ starts for the first time, the same steps as described above are executed, except that step 1 is replaced by the following:

1. Get the snapshot block (the one for height `HEIGHT_SNAPSHOT + 1`):
    1. Check if the snapshot block for height `HEIGHT_SNAPSHOT + 1` exists locally. If yes:
        1. Fetch this block.
        2. Verify that the block ID of this block matches the hard-coded block ID of the snapshot block. If it matches, skip step 1.2. If not, continue with step 1.2.
    2. The snapshot block for height `HEIGHT_SNAPSHOT + 1` is downloaded from a server. The URL for the source can be configured in Lisk Core v4+. Once downloaded, it must first be verified that the block ID of this block matches the hard-coded block ID of the snapshot block. If not, stop the initialization here (the process should be repeated, but the node operator should specify a new server from which to download the snapshot block).

Note that once the snapshot block for height `HEIGHT_SNAPSHOT + 1` is processed, the node should start its regular block synchronization, i.e., fetching the blocks with height larger than `HEIGHT_SNAPSHOT + 1`. The steps 4 to 5 from above could run in the background with low priority.
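The replaced step 1 of Lisk Core v4+ can be sketched analogously (again with hypothetical helper names; `HARDCODED_SNAPSHOT_BLOCK_ID` stands for the block ID hard coded into the release):

```python
def get_snapshot_block_v4_plus():
    # Step 1.1: prefer a locally available snapshot block with the expected block ID.
    if snapshot_block_exists_locally(HEIGHT_SNAPSHOT + 1):
        block = fetch_local_block(HEIGHT_SNAPSHOT + 1)
        if block_id(block) == HARDCODED_SNAPSHOT_BLOCK_ID:
            return block  # skip step 1.2
    # Step 1.2: download the snapshot block from a configurable server and verify its block ID.
    block = download_block(config.snapshotBlockURL, HEIGHT_SNAPSHOT + 1)
    if block_id(block) != HARDCODED_SNAPSHOT_BLOCK_ID:
        # Stop here; the node operator should configure a different snapshot server and retry.
        raise Exception("snapshot block ID mismatch")
    return block
```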
A block created by Lisk Core v3, i.e., a block with a height between `HEIGHT_PREVIOUS_SNAPSHOT_BLOCK` and `HEIGHT_SNAPSHOT` (inclusive), is validated as follows:

1. Verify that the block follows the block schema defined in LIP 0029.
2. Compute the block ID as defined in LIP 0029 and verify that it is equal to the `previousBlockID` property of the child block.
3. Verify that the transactions in the payload yield the transaction root provided in the block header.

If any of the steps above fails, the block is invalid.
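A minimal sketch of this validation, matching the three steps above (the helpers `follows_block_schema`, `compute_block_id`, and `compute_transaction_root` stand for the corresponding LIP 0029 logic and are hypothetical names):

```python
def validate_v3_block(block, child_block):
    # Step 1: the block must follow the block schema defined in LIP 0029.
    if not follows_block_schema(block):
        raise Exception("invalid block schema")
    # Step 2: the block ID must equal the previousBlockID of the already validated child block.
    if compute_block_id(block) != child_block.header.previousBlockID:
        raise Exception("block ID does not match previousBlockID of the child block")
    # Step 3: the transactions in the payload must yield the transaction root in the header.
    if compute_transaction_root(block.payload) != block.header.transactionRoot:
        raise Exception("transaction root mismatch")
```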
The snapshot block `b` is constructed in accordance with the definition of a genesis block in LIP 0060. The details for the `header` and `assets` properties are specified in the following subsections.

Let `a` be the block at height `HEIGHT_SNAPSHOT`. Then the following points define how `b.header` is constructed:

- `b.header.version = SNAPSHOT_BLOCK_VERSION`
- `b.header.timestamp = a.header.timestamp + SNAPSHOT_TIME_GAP`
- `b.header.height = HEIGHT_SNAPSHOT + 1`
- `b.header.previousBlockID = blockID(a)`
- All other block header properties must be as specified in LIP 0060.
From the registered modules, the following ones add an entry to `b.assets`:

- Legacy module
- Token module
- Auth module
- PoS module
- Interoperability module

How these modules construct their entry for the `assets` property is defined in the next subsections. The verification of their entries is defined in the Genesis Block Processing sections of the respective module LIPs. Note that once all modules add their entries to `b.assets`, this array must be sorted in lexicographical order of the `module` property.
In the following, let `accountsState` be the key-value store of the accounts state of `STATE_SNAPSHOT` computed as described above. That means, for a byte array `addr` representing an address, `accountsState[addr]` is the corresponding account object following the account schema defined in LIP 0030. Moreover, let `accounts` be the array that contains all values of `accountsState` for which the key is a 20-byte address, sorted in lexicographical order of their `address` property. Correspondingly, let `legacyAccounts` be the array that contains all values of `accountsState` for which the key is an 8-byte address, sorted in lexicographical order of their `address` property.
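The two arrays can be derived from `accountsState` as in the following sketch, splitting the keys by address length and sorting by the `address` property:

```python
def split_accounts(accountsState):
    # Keys of length 20 are regular addresses, keys of length 8 are legacy addresses.
    accounts = [accountsState[addr] for addr in accountsState if len(addr) == 20]
    legacyAccounts = [accountsState[addr] for addr in accountsState if len(addr) == 8]
    # Both arrays are sorted in lexicographical order of the address property.
    accounts.sort(key=lambda account: account.address)
    legacyAccounts.sort(key=lambda account: account.address)
    return accounts, legacyAccounts
```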
Let `genesisLegacyStoreSchema` be as defined in LIP 0050. The `assets` entry for the Legacy module is added by the logic defined in the function `addLegacyModuleEntry` in the following pseudo code:
def addLegacyModuleEntry():
legacyObj = {}
legacyObj.accounts = []
for every account in legacyAccounts:
userObj = {}
userObj.address = account.address
userObj.balance = account.token.balance
legacyObj.accounts.append(userObj)
sort legacyObj.accounts in the lexicographical order of userObj.address
data = serialization of legacyObj using genesisLegacyStoreSchema
append {"module": MODULE_NAME_LEGACY, "data": data} to b.assets
Let `genesisTokenStoreSchema` be as defined in LIP 0051. The `assets` entry for the Token module is added by the logic defined in the function `addTokenModuleEntry` in the following pseudo code:
def addTokenModuleEntry():
tokenObj = {}
tokenObj.userSubstore = createUserSubstoreArray()
tokenObj.supplySubstore = createSupplySubstoreArray()
tokenObj.escrowSubstore = []
tokenObj.supportedTokensSubstore = []
data = serialization of tokenObj using genesisTokenStoreSchema
append {"module": MODULE_NAME_TOKEN, "data": data} to b.assets
def createUserSubstoreArray():
userSubstore = []
for every account in accounts:
if account.address != ADDRESS_LEGACY_RESERVE:
userObj = {}
userObj.address = account.address
userObj.tokenID = TOKEN_ID_LSK
userObj.availableBalance = account.token.balance
userObj.lockedBalances = getLockedBalances(account)
userSubstore.append(userObj)
# Append the legacy reserve account separately.
userSubstore.append(createLegacyReserveAccount())
sort userSubstore in the lexicographical order of (userObj.address + userObj.tokenID)
return userSubstore
def createLegacyReserveAccount():
legacyReserveAccount = account in accounts with account.address == ADDRESS_LEGACY_RESERVE
isEmpty = legacyReserveAccount is empty
legacyReserve = {}
legacyReserve.address = ADDRESS_LEGACY_RESERVE
legacyReserve.tokenID = TOKEN_ID_LSK
legacyReserve.availableBalance = isEmpty ? 0 : legacyReserveAccount.token.balance
legacyReserveAmount = 0
for every account in legacyAccounts:
legacyReserveAmount += account.token.balance
lockedBalances = isEmpty ? [] : getLockedBalances(legacyReserveAccount)
legacyReserve.lockedBalances = lockedBalances.append({"module": MODULE_NAME_LEGACY, "amount": legacyReserveAmount})
sort legacyReserve.lockedBalances in the lexicographical order of lockedBalance.module
return legacyReserve
def getLockedBalances(account):
amount = 0
for vote in account.dpos.sentVotes:
amount += vote.amount
for unlockingObj in account.dpos.unlocking:
amount += unlockingObj.amount
if amount > 0:
return [{"module": MODULE_NAME_POS, "amount": amount}]
else:
return []
def createSupplySubstoreArray():
totalLSKSupply = 0
for every account in accounts:
totalLSKSupply += account.token.balance
lockedBalances = getLockedBalances(account)
if lockedBalances is not empty:
totalLSKSupply += lockedBalances[0].amount
legacyReserveAmount = 0
for every account in legacyAccounts:
legacyReserveAmount += account.token.balance
LSKSupply = {"tokenID": TOKEN_ID_LSK, "totalSupply": totalLSKSupply + legacyReserveAmount}
return [LSKSupply]
Let `genesisAuthStoreSchema` be as defined in LIP 0041. The `assets` entry for the Auth module is added by the logic defined in the function `addAuthModuleEntry` in the following pseudo code:
def addAuthModuleEntry():
authDataSubstore = []
for every account in accounts:
authObj = {}
authObj.numberOfSignatures = account.keys.numberOfSignatures
# Sort the keys in the lexicographical order if needed.
authObj.mandatoryKeys = account.keys.mandatoryKeys.sort()
authObj.optionalKeys = account.keys.optionalKeys.sort()
authObj.nonce = account.sequence.nonce
entry = {"address": account.address, "authAccount": authObj}
authDataSubstore.append(entry)
sort authDataSubstore in the lexicographical order of object.address
data = serialization of authDataSubstore using genesisAuthStoreSchema
append {"module": MODULE_NAME_AUTH, "data": data} to b.assets
Let `genesisPoSStoreSchema` be as defined in LIP 0057. The `assets` entry for the PoS module is added by the logic defined in the function `addPoSModuleEntry` in the following pseudo code:
def addPoSModuleEntry():
PoSObj = {}
PoSObj.validators = createValidatorsArray()
PoSObj.stakers = createStakersArray()
PoSObj.genesisData = createGenesisDataObj()
data = serialization of PoSObj using genesisPoSStoreSchema
append {"module": MODULE_NAME_POS, "data": data} to b.assets
def createValidatorsArray():
validators = []
validatorKeys = getValidatorKeys()
for every account in accounts:
if account.dpos.delegate.username == "":
continue
validator = {}
validator.address = account.address
validator.name = account.dpos.delegate.username
validator.blsKey = INVALID_BLS_KEY
validator.proofOfPossession = DUMMY_PROOF_OF_POSSESSION
        if account.address in validatorKeys:
validator.generatorKey = validatorKeys[account.address]
else:
validator.generatorKey = INVALID_ED25519_KEY
validator.lastGeneratedHeight = account.dpos.delegate.lastForgedHeight
validator.isBanned = True
validator.reportMisbehaviorHeights = account.dpos.delegate.pomHeights
validator.consecutiveMissedBlocks = account.dpos.delegate.consecutiveMissedBlocks
validator.commission = 10000
validator.lastCommissionIncreaseHeight = HEIGHT_SNAPSHOT
validator.sharingCoefficients = [
{"tokenID": TOKEN_ID_LSK, "coefficient": Q96_ZERO}
]
validators.append(validator)
sort validators in the lexicographical order of validator.address
return validators
# This function gets the public keys of the registered validators,
# i.e., the accounts for which account.dpos.delegate.username is not
# the empty string, from the history of Lisk Core v3 blocks and transactions.
def getValidatorKeys():
let keys be an empty dictionary
for every block c with height in [HEIGHT_PREVIOUS_SNAPSHOT_BLOCK + 1, HEIGHT_SNAPSHOT]:
address = address(c.generatorPublicKey)
keys[address] = c.generatorPublicKey
for trs in c.transactions:
address = address(trs.senderPublicKey)
if the validator corresponding to address is a registered validator:
keys[address] = trs.senderPublicKey
return keys
def getStakes(account):
stakes = account.dpos.sentVotes
for every stake in stakes:
stake.sharingCoefficients = [
{"tokenID": TOKEN_ID_LSK, "coefficient": Q96_ZERO}
]
return stakes
def createStakersArray():
stakers = []
for every account in accounts:
if account.dpos.sentVotes == [] and account.dpos.unlocking == []:
continue
staker = {}
staker.address = account.address
staker.stakes = getStakes(account)
staker.pendingUnlocks = account.dpos.unlocking
stakers.append(staker)
sort stakers in the lexicographical order of staker.address
return stakers
def createGenesisDataObj():
genesisDataObj = {}
genesisDataObj.initRounds = POS_INIT_ROUNDS
roundLengthV3 = 103
let topValidators be the set of the top 101 non-banned validator accounts by validator weight at height (HEIGHT_SNAPSHOT - 2 * roundLengthV3)
initValidators = [account.address for account in topValidators]
sort initValidators in lexicographical order
genesisDataObj.initValidators = initValidators
return genesisDataObj
Let `genesisInteroperabilityStoreSchema` be as defined in LIP 0045. The `assets` entry for the Interoperability module is added by the logic defined in the function `addInteropModuleEntry` in the following pseudo code:
def addInteropModuleEntry():
InteropObj = {}
InteropObj.ownChainName = CHAIN_NAME_MAINCHAIN
InteropObj.ownChainNonce = 0
InteropObj.chainInfos = []
InteropObj.terminatedStateAccounts = []
InteropObj.terminatedOutboxAccounts = []
data = serialization of InteropObj using genesisInteroperabilityStoreSchema
append {"module": MODULE_NAME_INTEROPERABILITY, "data": data} to b.assets
The Token module must be registered before the Fee module such that `Token.beforeCrossChainCommandExecution` is called before `Fee.beforeCrossChainCommandExecution`, to ensure that a relayer has enough funds. See also here.
The decision to discard the block history of Lisk Mainnet at the hard fork from Lisk Core v2 to Lisk Core v3 was considered rather disadvantageous. One reason is that the entire transaction history of an account can no longer be retrieved from a node running Lisk Core v3. Therefore, the block history of blocks created with Lisk Core v3 shall be kept on Lisk Core v4 nodes.

In order to keep the implementation of Lisk Core v4 as clean as possible, Lisk Core v4 does not process Lisk Core v3 blocks but only ensures their integrity. This is, however, sufficient for maintaining the Lisk Core v3 block history and guaranteeing its immutability.
Once Lisk Core v4 processes the snapshot block, it can fetch the whole Lisk Core v3 block history, which is done from the highest to the lowest height. This allows each block to be validated by simply checking whether its block ID matches the `previousBlockID` property of the child block and whether the payload matches the `transactionRoot` property. Thus, almost no protocol rules related to Lisk Core v3 must be implemented, which keeps the implementation clean. The Lisk Core v3 history can be fetched from a database created by Lisk Core v3 on the same machine or from the peer-to-peer network. Note that the first Lisk Core v3 block - the snapshot block of the previous migration - poses an exception, as it is either downloaded from a server or fetched from a database created by Lisk Core v3 on the same machine.
The snapshot block created for this hard fork, as well as the one used for the previous hard fork, is not shared via the peer-to-peer network due to its large size. Instead, both are either downloaded from a server or must be found locally.

The initial version of Lisk Core v4 cannot download the snapshot block for this hard fork from a server because it could not validate the block. Instead, it must find the snapshot block locally on the same machine. This requires that the snapshot block is created using a migrator tool on the same node, which in turn requires that the node runs the latest patched version of Lisk Core v3 until the block at height `HEIGHT_SNAPSHOT` is final.
Once the snapshot block is final, a patched version of Lisk Core v4 will be released in which the block ID of the snapshot block is hard coded. This allows a downloaded snapshot block to be validated by simply checking its block ID.

The URL can be configured. By default, the URL points to a server of the Lisk Foundation. The snapshot block of this hard fork is validated by checking that its block ID matches the hard-coded block ID in the patched version of Lisk Core v4. The snapshot block of the previous migration is validated by checking that its block ID matches the `previousBlockID` property of its child block (see also here). Hence, there are no security concerns in downloading them even from a non-trusted server. Users can nevertheless configure their node to download it from a server of their choice. The snapshot blocks could be provided by any user. This may also be helpful in situations where the Lisk Foundation server is down.
In blockchains created with Lisk SDK v6, one needs to register a valid BLS key in order to become a validator. This results in an exception for Lisk Core v4, as the existing validators do not have a valid BLS key right after the hard fork. Since the PoS module expects that each validator account has a BLS key and a proof of possession set in the snapshot block (see here), this is done by assigning a fixed placeholder key for which validation will always fail (INVALID_BLS_KEY), along with a dummy value for the proof of possession (DUMMY_PROOF_OF_POSSESSION). This proof of possession will never be evaluated, as the PoS and the Validators module handle an exception for this particular case (see here and here). Every validator must then register a valid BLS key via a register keys transaction in order to become an active validator, ideally within the bootstrap period described below.
In the protocol used by Lisk Core v4, each validator needs a generator key in order to generate blocks. Moreover, the PoS module expects that each validator account has such a key specified in the snapshot block; otherwise, the snapshot block is invalid (see here). Therefore, the aim is to use the existing public keys of the validator accounts as the initial generator keys in the snapshot block. However, in Lisk Core v3 the public key no longer belongs to the account state. For this reason, the whole block history from the previous snapshot block onwards is scanned for public keys of registered validators. The found keys are used as generator keys in the snapshot block. Validator accounts for which no public key was found get an "invalid" public key assigned, i.e., one for which every signature validation will fail. The specific value is discussed in the following subsection.

The register keys transaction allows setting not only a BLS key but also a new generator key. Hence, every validator has the opportunity to set a new generator key. Notice, however, that this transaction can only be submitted once per validator, and the current version of the PoS module does not provide any update mechanism for the generator key.
The chosen value for the Ed25519 public key for which every signature validation should fail, `0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff`, is the little-endian encoding of 2^256 - 1. As described in the first point of the decoding section of RFC 8032, decoding this value fails because the y-coordinate value 2^255 - 1 is bigger than p = 2^255 - 19 (see the notations section for more information on little-endian encoding in the EdDSA protocol). This in turn results in signature verification failure as described in the first point of the verification section, regardless of the message and the signature.
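The failing comparison can be checked with a few lines (a sketch of the decoding condition only, not an Ed25519 implementation):

```python
INVALID_ED25519_KEY = bytes([0xFF] * 32)

# Little-endian decoding per RFC 8032: the most significant bit of the last byte encodes the
# sign of the x-coordinate; the remaining 255 bits are the candidate y-coordinate.
y = int.from_bytes(INVALID_ED25519_KEY, "little") & ((1 << 255) - 1)  # = 2**255 - 1
p = 2**255 - 19

# Decoding fails because y >= p, so signature verification with this key always fails.
assert y >= p
```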
There will be a bootstrap period in which a fixed set of validators constitutes the set of active validators. This period will last for one week assuming no missed blocks. This is done by setting the `initRounds` property in the snapshot block to the corresponding value, which is given by `POS_INIT_ROUNDS`. The reason for this period is to ensure that, outside of the bootstrap period, the PoS module selects only validators with a valid BLS key. This is realized by 1) banning every validator in the snapshot block, and 2) enabling unbanning via a register keys transaction. This way, validators are forced to submit a register keys transaction, and therefore to register a valid BLS key and a (new) generator key, in order to become an active validator. Notice that with this approach, the PoS module does not need to check the validity of the BLS key of a validator when selecting the active validators. Therefore, it also does not need to store them.
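As a quick check of the one-week claim: `POS_INIT_ROUNDS * (NUMBER_ACTIVE_VALIDATORS + NUMBER_STANDBY_VALIDATORS) * BLOCK_TIME` = 587 × 103 × 10 s = 604,610 s ≈ 7 days, assuming no missed blocks.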
The bootstrap period gives each validator a sufficiently large time window to submit a register keys transaction. The expectation is that most validators are unbanned after the bootstrap period. In the unexpected case that there are fewer than 101 non-banned validators, the PoS module would simply select a smaller set of validators, which get the block slots of a round assigned round-robin.
This LIP describes the protocol and process for conducting a hard fork on the Lisk mainnet.