This repository has been archived by the owner on Nov 15, 2023. It is now read-only.
This issue is related to relay-chain transaction fees research here.
Motivation
Different types of transactions will have different fee levels. This fee differentiation reflects the different resource costs that transactions incur, and encourages or discourages certain types of transactions. We therefore need to analyze the resource usage of each transaction type in order to adjust the fees accordingly (to be done).
Recommended Steps
To start, we need some way to benchmark transaction execution in the runtime. This will help estimate the costs of Polkadot runtime transactions for users.
1. Set up some benchmarking tools/primitives.
2. Go through every module and evaluate all the transaction types and their complexity, e.g. writes to storage, complex reads, loops, etc. Identify transactions with bad or unbounded complexity (memory, storage, compute).
3. Refactor code to improve those bounds.
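To make the audit in step 2 concrete, here is a rough sketch of the distinction between bounded and unbounded dispatch cost. This is not actual srml code; `Balances`, `transfer`, and `total_issuance` are invented stand-ins for a module's storage and dispatchables:

```rust
use std::collections::BTreeMap;

// Hypothetical storage map standing in for a module's on-chain state.
type Balances = BTreeMap<u64, u128>;

// Unbounded: cost grows with the total number of accounts, so no fixed
// fee can reflect its cost. This is the pattern the audit should flag.
fn total_issuance(balances: &Balances) -> u128 {
    balances.values().sum() // O(n) storage reads
}

// Bounded: touches a constant number of storage items (two reads, two
// writes), so a flat fee can reflect its true cost.
fn transfer(balances: &mut Balances, from: u64, to: u64, amount: u128) -> Result<(), &'static str> {
    let from_balance = *balances.get(&from).ok_or("unknown sender")?;
    if from_balance < amount {
        return Err("insufficient balance");
    }
    balances.insert(from, from_balance - amount);
    *balances.entry(to).or_insert(0) += amount;
    Ok(())
}
```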
I would recommend starting with some benchmarking warm-up, instead of blindly analyzing read/write/loop counts in modules, to get a feel for what is actually expensive to do. The Rust test library already has #[bench], but I couldn't find a good way in it to feed custom input to a function and observe the growth of execution time. Criterion, on the other hand, supports this and looks like a good crate to get some help from.
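Even before wiring in Criterion, the growth-of-time idea can be prototyped with nothing but `std::time::Instant`. A throwaway sketch, where `insert_n` is a placeholder workload rather than a real runtime call:

```rust
use std::collections::BTreeMap;
use std::time::Instant;

// Placeholder workload: a storage-heavy operation whose cost should
// scale with input size. Swap in a real dispatch call here.
fn insert_n(n: u64) -> BTreeMap<u64, u64> {
    let mut map = BTreeMap::new();
    for i in 0..n {
        map.insert(i, i);
    }
    map
}

fn main() {
    // Feed growing inputs and observe how execution time scales.
    for n in [1_000u64, 10_000, 100_000] {
        let start = Instant::now();
        let map = insert_n(n);
        let elapsed = start.elapsed();
        println!("n = {:>6}: {:?} ({} entries)", n, elapsed, map.len());
    }
}
```

Criterion's parameterized benchmarks do this properly (warm-up, statistical sampling, outlier detection); the above only illustrates the shape of the experiment.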
The above, combined with our ExternalitiesBuilder (commonly used to simulate a complete Substrate runtime for testing individual srml modules), should be enough to add a set of benchmarks to each module.
That all being said, this sits at the abstraction level of a runtime dispatch function. What we could do alternatively is start at a micro-benchmark level and first analyze the storage functions individually.
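A micro-benchmark at that level might look something like this sketch. `MockStorage` and its `get`/`put` are invented stand-ins for the real storage layer, not Substrate APIs; the point is only that reads and writes get timed separately:

```rust
use std::collections::HashMap;
use std::time::Instant;

// Toy in-memory backend standing in for the real storage layer.
struct MockStorage {
    data: HashMap<Vec<u8>, Vec<u8>>,
}

impl MockStorage {
    fn new() -> Self {
        MockStorage { data: HashMap::new() }
    }
    fn put(&mut self, key: &[u8], value: &[u8]) {
        self.data.insert(key.to_vec(), value.to_vec());
    }
    fn get(&self, key: &[u8]) -> Option<&Vec<u8>> {
        self.data.get(key)
    }
}

fn main() {
    let mut storage = MockStorage::new();

    // Time writes in isolation.
    let start = Instant::now();
    for i in 0u32..10_000 {
        storage.put(&i.to_le_bytes(), b"value");
    }
    let write_time = start.elapsed();

    // Then time reads in isolation.
    let start = Instant::now();
    let mut hits = 0;
    for i in 0u32..10_000 {
        if storage.get(&i.to_le_bytes()).is_some() {
            hits += 1;
        }
    }
    println!("writes: {:?}, reads: {:?}, hits: {}", write_time, start.elapsed(), hits);
}
```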
@mattrutherford I heard that you have some experience with doing a similar thing in eth client. Any insight would be great to kickstart this hopefully by the end of the week.
Criterion looks like a good tool for some low-level benchmarks. The benchmarking work I am involved in on the eth client is focused at a high level. We should consider that storage (IO) cost may vary greatly depending on the state of the db; I'd like to prepare several scenarios to test against. My preference would be to start at a high level, possibly with some system-level benchmarks, but I'd be happy to go with either of your suggestions.
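The "same operation, different db state" scenarios could be sketched along these lines, with a `BTreeMap` as a crude stand-in for the backing store (real IO behavior on disk will of course differ; the names here are all hypothetical):

```rust
use std::collections::BTreeMap;
use std::time::{Duration, Instant};

// Pre-populate a stand-in database to a given size.
fn populated(size: u64) -> BTreeMap<u64, u64> {
    (0..size).map(|i| (i, i)).collect()
}

// Time a fixed number of reads against a database of a given state size.
fn time_reads(db: &BTreeMap<u64, u64>, reads: u64) -> Duration {
    let start = Instant::now();
    let mut sum = 0u64;
    for i in 0..reads {
        sum = sum.wrapping_add(*db.get(&(i % db.len() as u64)).unwrap());
    }
    // Use the result so the loop isn't optimized away.
    assert!(sum < u64::MAX);
    start.elapsed()
}

fn main() {
    // Same workload, three state-size scenarios.
    for size in [1_000u64, 100_000, 1_000_000] {
        let db = populated(size);
        println!("state size {:>9}: {:?} for 10k reads", size, time_reads(&db, 10_000));
    }
}
```

With a real on-disk backend, the scenarios would also want to vary key distribution and cache warmth, not just state size.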
@mattrutherford at any point if you open up a PR here or (more likely) in a separate repo, please link this issue to prevent it from becoming too stale.
cc: @keorn @kianenigma @mattrutherford