Add the Kusama Coretime Chain runtime #212
Conversation
Force-pushed from 7c18139 to 167860c
So we start with an uninitialized broker configuration and then set it via governance?
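(For context, a minimal sketch of how the configuration could be initialised post-genesis, assuming `pallet_broker`'s `configure` extrinsic and an admin origin that accepts Root; the helper name is illustrative, not code from this PR.)

```rust
// Hedged sketch: the broker `Configuration` storage starts empty at genesis and
// is expected to be set later by whatever origin is wired up as
// `pallet_broker::Config::AdminOrigin` (e.g. Root or a governance track)
// dispatching `configure`.
use coretime_kusama_runtime::{Runtime, RuntimeOrigin};
use frame_support::assert_ok;
use pallet_broker::ConfigRecordOf;

fn initialise_broker_config(config: ConfigRecordOf<Runtime>) {
    // In a test (or, conceptually, via a governance proposal), dispatch
    // `configure` from the admin origin with the desired `ConfigRecord`.
    assert_ok!(pallet_broker::Pallet::<Runtime>::configure(
        RuntimeOrigin::root(),
        config,
    ));
}
```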
/// Proof: `ParachainSystem::ValidationData` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
/// Storage: `ParachainSystem::LastRelayChainBlockNumber` (r:1 w:0)
/// Proof: `ParachainSystem::LastRelayChainBlockNumber` (`max_values`: Some(1), `max_size`: None, mode: `Measured`)
fn set_lease() -> Weight {
Weights look good. Relay chain configuration:
require_weight_at_most: Weight::from_parts(170_000_000, 20_000),
Worst case call is reserve with 13k ... still fits.
These are copied over from Rococo, so there may be some changes when I run the bench bot.
I've been putting together some emulated tests for the relay calls in the coretime chain in case things change drastically whenever the weights are regenerated; it might be worth having a weight sanity check for these too.
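(Something along the lines of the following hedged sketch could serve as that sanity check; the weight constant is the one quoted above, while the chosen call and the `WeightInfo` path are assumptions to illustrate the idea.)

```rust
// Illustrative sanity check: the coretime chain's benchmarked weight for the
// worst-case call dispatched via Transact must stay within the hardcoded
// `require_weight_at_most` used on the sending side.
use coretime_kusama_runtime::Runtime;
use frame_support::weights::Weight;

#[test]
fn hardcoded_transact_weight_covers_benchmarked_call() {
    // Hardcoded value from the relay chain configuration quoted above.
    let require_weight_at_most = Weight::from_parts(170_000_000, 20_000);

    // Assumed worst case: `pallet_broker::reserve`, taken from the runtime's
    // configured `WeightInfo` so the check tracks regenerated weights.
    let benchmarked =
        <<Runtime as pallet_broker::Config>::WeightInfo as pallet_broker::weights::WeightInfo>::reserve();

    assert!(
        benchmarked.all_lte(require_weight_at_most),
        "hardcoded Transact weight no longer covers the benchmarked call weight"
    );
}
```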
>;

/// Means for transacting coretime regions on this chain.
pub type RegionTransactor = NonFungibleAdapter<
It would be nice to have an emulated test for this transactor, at least before this chain becomes fully operational.
Leadin seems off.
fn filter(&self, c: &RuntimeCall) -> bool {
    match self {
        ProxyType::Any => true,
        ProxyType::NonTransfer => !matches!(
blacklisting instead of whitelisting is extremely brittle.
It's implemented this way in the asset-hubs and collectives, as the concept itself is a blacklist. I see what you're saying though: adding a new pallet which contains a transfer-like extrinsic without updating this would be an issue. The same issue applies the other way though; if you whitelist a whole pallet (as is commonly done) and a new extrinsic is added, it must be changed here too.
For me I don't mind either way, this is just less code. WDYT?
Just flagging that I think it is a bad idea. It should probably be a ticket of its own.
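(To make the trade-off concrete, a self-contained toy sketch of the two filtering styles; the enum and variant names are invented for illustration and are not the runtime's actual `RuntimeCall` or `ProxyType`.)

```rust
// Toy illustration of deny-list vs allow-list call filtering for a proxy type.
// The real filter matches on the runtime's `RuntimeCall`; these enums are stand-ins.
enum Call {
    BalancesTransfer,
    BrokerPurchase,
    BrokerTransfer,
    SystemRemark,
}

enum ProxyKind {
    Any,
    NonTransferDenyList,
    NonTransferAllowList,
}

impl ProxyKind {
    fn filter(&self, c: &Call) -> bool {
        match self {
            ProxyKind::Any => true,
            // Deny-list (as in this PR): allow everything except known transfer-like
            // calls. A new transfer-like extrinsic is allowed until someone adds it here.
            ProxyKind::NonTransferDenyList => {
                !matches!(c, Call::BalancesTransfer | Call::BrokerTransfer)
            }
            // Allow-list: permit only known-safe calls. A new harmless extrinsic is
            // blocked until someone adds it here.
            ProxyKind::NonTransferAllowList => {
                matches!(c, Call::BrokerPurchase | Call::SystemRemark)
            }
        }
    }
}

fn main() {
    assert!(ProxyKind::Any.filter(&Call::BrokerTransfer));
    assert!(!ProxyKind::NonTransferDenyList.filter(&Call::BalancesTransfer));
    assert!(ProxyKind::NonTransferAllowList.filter(&Call::SystemRemark));
    assert!(!ProxyKind::NonTransferAllowList.filter(&Call::BrokerTransfer));
}
```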
AFAIK there is just one more thing to address here, viz. handling the burning of Coretime revenue. There is a branch here that puts the revenue in a temp pot that we can handle "properly" in the next runtime upgrade.
The reason we don't burn funds directly on each sale is that the Relay Chain's […]. IMO the correct solution is to accumulate funds in some temp pot and every period (session, day, sales period, etc.) send an XCM program to the Relay Chain that will teleport the assets (to withdraw them from […]). Donal suggested the above branch as a way to hold funds until the next upgrade handles this XCM program.
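(A minimal sketch of that temp-pot approach, assuming `OnRevenue` is an `OnUnbalanced` handler over a fungible credit as in `pallet_broker`; the pallet id and account derivation are illustrative, not the branch's actual code.)

```rust
// Hedged sketch: accumulate Coretime sale revenue in a dedicated pot account
// instead of burning it here, so a later upgrade can teleport it to the Relay
// Chain and burn it there (per RFC-10). Names below are assumptions.
use coretime_kusama_runtime::{AccountId, Balances};
use frame_support::{
    traits::{
        fungible::{Balanced, Credit},
        OnUnbalanced,
    },
    PalletId,
};
use sp_runtime::traits::AccountIdConversion;

pub struct CoretimeRevenuePot;

impl OnUnbalanced<Credit<AccountId, Balances>> for CoretimeRevenuePot {
    fn on_nonzero_unbalanced(revenue: Credit<AccountId, Balances>) {
        // Hypothetical pallet id; any stable account with no private key works.
        let pot: AccountId = PalletId(*b"py/ctrev").into_account_truncating();
        // Resolve the credit into the pot. If this somehow fails, the returned
        // credit is dropped, which burns it on this chain only.
        let _ = Balances::resolve(&pot, revenue);
    }
}
```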
Please apply the Cargo.toml suggestions, which should make it compilable.
Fixed here: seadanda#2
Co-authored-by: Branislav Kontur <[email protected]>
Also, I am investigating one failed benchmark with the new weights; it will be fixed here: seadanda#2
Fixed by seadanda@648edde
/// if cores sell out) to avoid runaway prices in the early sales. The intention is that this will
/// be coupled with a low number of cores per sale and a 100% ideal bulk ratio for the first sales.
pub struct PriceAdapter;
impl AdaptPrice for PriceAdapter {
Given that the implementation of the price adapter is different from the ones used on Rococo and Westend, it becomes even more important to expose a runtime API for querying the price. This will prevent mistakes in offchain code connecting to the Coretime chain: paritytech/polkadot-sdk#3485
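(For illustration, a hedged sketch of what such a runtime API could look like; the trait and method names here are placeholders, the real definition being the one added in paritytech/polkadot-sdk#3485.)

```rust
// Illustrative only: expose the current Bulk Coretime sale price via a runtime
// API so offchain tooling does not have to re-implement the price curve.
use codec::Codec;

sp_api::decl_runtime_apis! {
    /// Hypothetical API for querying the broker sale price.
    pub trait CoretimePriceApi<Balance>
    where
        Balance: Codec,
    {
        /// Price of a core if purchased in the current block; `None` if no sale is ongoing.
        fn sale_price() -> Option<Balance>;
    }
}
```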
}

/// A call filter for the XCM Transact instruction. This is a temporary measure until we properly
/// account for proof size weights.
That should be resolved by using the new mq pallet or @ggwpez?
* Add burn onrevenue impl (WIP)
* Define burn account into which revenue will be moved
* Add test (WIP)
* Fix test
/merge
Merged 640a001 into polkadot-fellows:main
In the update to [sdk 1.14](#381), `request_revenue_info` is now implemented, but this call had been switched off when it was added in #212. This PR just enables it again in the coretime interface. cc @ggwpez @s0me0ne-unkn0wn
This is similar to the two chains we deployed to Rococo and Westend, with a different `PriceAdapter` implementation. This is likely to be the most contentious addition here and is obviously not final, but a start until the correct function and parametrisation can be decided upon. Discussion is welcome.

I have also considered adding a minimum factor in `adapt_price` to ensure that we never actually hit 0. Due to the price corrections being purely multiplicative, if the price ever hits zero there is no going back, which I think should be configurable to avoid. I would be interested to hear people's thoughts on this. I can also add this minimum value to the broker pallet upstream (which I think is probably the best place for that).

Due to the Transact XCMs in the `CoretimeInterface`, there is a need for hardcoded weights for several relay chain calls. These should be checked after benchmarking the relay chain. There are hardcoded weights on the Relay Chain which should also be checked when the Coretime Chain is benchmarked. XCM emulator tests are being put together to ensure this happens automatically.

TODO:
- `OnRevenue` to burn revenue from Coretime (RFC-010) WIP: here.

Pricing model
The `AdaptPrice` implementation in this PR is designed together with a guideline configuration to provide conservative limits in the early sales to avoid high volatility. I have listed the suggested configuration and outlined the reasoning below.

tl;dr:
Price adapter behaviour:
- `leadin_factor_at`: linear with a max of 5x
- `adapt_price`: linear with conservative limits; maximum factor not relevant for our ideal bulk proportion settings, but set to 1.2 as a conservative limit in case we want to change config without a runtime upgrade

Start sales parameters:
- `initialPrice`: ~5 KSM
- `coreCount`: 3

Configuration parameters:
- `advanceNotice`: 10 (1 min)
- `interludeLength`: 50400 (7 days)
- `leadinLength`: 50400 (7 days)
- `regionLength`: 5040 (28 days)
- `idealBulkProportion`: 1000000000 (100% for the first few sales)
- `limitCoresOffered`: 3
- `renewalBump`: 30000000 (3% per sale / ~47% annually)
- `contributionTimeout`: 5040 (28 days)

Price adapter reasoning
For the price adapter the main goals are to avoid frontrunning while also minimising volatility in the early sales. The assumption is that the price will take time to stabilise, and with large corrections we end up flip-flopping between values far above and far below the true market price; the larger those swings between sales, the longer the price takes to stabilise. We assume few cores per sale and take into consideration the fact that most cores are occupied by a lease.
The decision to go for a higher lead-in factor and a very conservative price adapter is due to the expectation that the early sales will be set up such that all cores are sold. A high lead-in max factor allows people to buy at the price they feel comfortable with, which affects the market price. This, in combination with a conservative price adapter (at the start, with no upward correction), means that the price people are willing to pay is used in the next sale when the cores are sold out, and is adjusted downwards if not all cores are sold. In the case where no cores are sold, a minimum factor of 0.5 is applied to gradually bring the price down until a price is found, rather than having something symmetrical which snaps up and then can rebound right back down in the extreme case.
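(A hedged sketch of a price adapter with this shape, assuming the `AdaptPrice` trait as it stood at the time, i.e. `leadin_factor_at(when)` over lead-in progress and `adapt_price(sold, target, limit)` over core counts. The exact curve in the PR may differ; with a 100% ideal bulk proportion the upper branch is effectively unused.)

```rust
// Illustration of the described behaviour: a linear lead-in from 5x down to 1x,
// and a conservative between-sales correction clamped to roughly [0.5, 1.2].
use pallet_broker::{AdaptPrice, CoreIndex};
use sp_runtime::{
    traits::{One, Saturating},
    FixedPointNumber, FixedU64,
};

pub struct SketchPriceAdapter;

impl AdaptPrice for SketchPriceAdapter {
    fn leadin_factor_at(when: FixedU64) -> FixedU64 {
        // `when` runs from 0 (start of lead-in) to 1 (end): 5x falling linearly to 1x.
        FixedU64::saturating_from_integer(5u64)
            .saturating_sub(FixedU64::saturating_from_integer(4u64).saturating_mul(when))
    }

    fn adapt_price(sold: CoreIndex, target: CoreIndex, limit: CoreIndex) -> FixedU64 {
        if sold <= target {
            // Undersold: scale down linearly, but never below 0.5x, so one bad sale
            // cannot drive the (multiplicative) price to zero.
            let fill =
                FixedU64::saturating_from_rational(u32::from(sold), u32::from(target.max(1)));
            FixedU64::saturating_from_rational(1u32, 2u32)
                .saturating_add(FixedU64::saturating_from_rational(1u32, 2u32).saturating_mul(fill))
        } else {
            // Oversold: scale up linearly, capped at 1.2x as a conservative limit.
            let overshoot = FixedU64::saturating_from_rational(
                u32::from(sold - target),
                u32::from(limit.saturating_sub(target).max(1)),
            );
            FixedU64::one().saturating_add(
                FixedU64::saturating_from_rational(1u32, 5u32).saturating_mul(overshoot),
            )
        }
    }
}
```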
Start sales reasoning
This is purely for the first sale, after which the price adapter takes over.
We suggest a starting value between ~4.4 KSM and ~36.8 KSM depending on the figure we assume for utilisation. These values correspond to the average (mean) utilisation of 13% and to maximum utilisation of 100%, respectively. We estimate that the true market price lies between these two values, and so we suggest starting from the average (mean) with a lead-in factor of 5.
This lets the market decide between guardrails of ~13%-65% utilisation in the first sale. We then round up the suggestion to 5 KSM to acknowledge the uncertainty around this value.
This is based on the model set out in Erin's proposal and (also Erin's) Kusama tweak to reflect utilisation.
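(A back-of-the-envelope check, my arithmetic rather than the proposal's: with the suggested 5 KSM start price and the 5x maximum lead-in factor, the opening price of the first sale is)

```math
5\ \text{KSM} \times 5 = 25\ \text{KSM},
```

decaying linearly to 5 KSM over the lead-in, so the whole window stays inside the ~4.4 KSM to ~36.8 KSM band above.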
Configuration reasoning
- `advanceNotice`: 10 (1 min) - We require ~10 blocks to send notifyCore messages for 100 cores if the max is 10 XCM messages per block. Note: 6s blocks.
- `interludeLength`: 50400 (7 days) - Gives people ample time to renew without reducing the sale too much. Note: 12s blocks.
- `leadinLength`: 50400 (7 days) - The price drops for 7 days before becoming fixed, then is fixed for just 14 days before we hit the next region. Note: 12s blocks.
- `regionLength`: 5040 (28 days) - This is in timeslices, which are set to 80 blocks in the coretime-kusama runtime. These are also relay chain blocks (6s).
- `idealBulkProportion`: 1000000000 or 400000000 - 100% paired with a small core offering, or 40-60% later on when more people are off their leases and there is a more fluid market with a higher core limit; this needs to be modelled before being updated after the first few sales.
- `limitCoresOffered`: 3 initially - This should replace the auctions that were cancelled (coupled with the 100% target this means we don't cause price spikes).
- `renewalBump`: 30000000 - 3% every 28 days equates to around 47% annually.
- `contributionTimeout`: 5040 (28 days) - Not much thought behind this one; 1 region to collect seems like a good starting point. This is not relevant until Coretime Credits are implemented on the relay chain.
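(Putting the suggested parameters together, a hedged sketch of the corresponding `ConfigRecord`; field names follow `pallet_broker` and should be double-checked against the runtime's actual block-number and timeslice types before use.)

```rust
// The suggested initial configuration from this PR expressed as a broker
// ConfigRecord. Generic block-number parameters are shown as plain u32 here.
use pallet_broker::ConfigRecord;
use sp_runtime::Perbill;

fn suggested_config() -> ConfigRecord<u32, u32> {
    ConfigRecord {
        advance_notice: 10,                                // ~1 min of 6s relay blocks
        interlude_length: 50_400,                          // 7 days of 12s blocks
        leadin_length: 50_400,                             // 7 days of 12s blocks
        region_length: 5040,                               // 28 days, in 80-block timeslices
        ideal_bulk_proportion: Perbill::from_percent(100), // 1000000000
        limit_cores_offered: Some(3),
        renewal_bump: Perbill::from_percent(3),            // 30000000, ~47% annually
        contribution_timeout: 5040,                        // 28 days
    }
}

// `start_sales` would then be dispatched (via governance) with roughly:
//   initial_price: ~5 KSM
//   core_count:    3
```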