This repository has been archived by the owner on Nov 15, 2023. It is now read-only.

Determine transaction (fee) weights by transaction types #2431

Closed
soulofamachine opened this issue Apr 30, 2019 · 3 comments · Fixed by #3157
Labels
U2-some_time_soon Issue is worth doing soon.
Comments

@soulofamachine
Contributor

This issue is related to relay-chain transaction fees research here.

Motivation
Different types of transactions will have different fee levels. This fee differentiation reflects the different resource costs incurred by transactions, and is also used to encourage or discourage certain types of transactions. Thus, we need to analyze the resource usage of each transaction type so that fees can be adjusted accordingly (to be done).

Recommended Steps

To start, we need some way to benchmark transaction execution in runtime. This will help to estimate the costs of Polkadot runtime transactions for users.

  1. Set up some benchmarking tools/primitives

  2. Go through every module and evaluate all the transaction types and their complexity. e.g. writes to storage, complex reads, loops, etc. Identify transactions with bad or unbounded complexity (memory, storage, compute).

  3. Refactor code to improve the bounds identified in step 2.
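
The unbounded-complexity pattern that steps 2 and 3 are after can be illustrated with a small, made-up example (plain Rust over an in-memory map, not actual srml code; the function names are hypothetical):

```rust
use std::collections::BTreeMap;

// Hypothetical operation whose cost grows with the number of stored
// entries -- the kind of unbounded complexity step 2 should flag.
fn clear_all(storage: &mut BTreeMap<u64, u64>) -> usize {
    let n = storage.len(); // O(n) work, with n controlled by chain state
    storage.clear();
    n
}

// A bounded refactor in the spirit of step 3: remove at most `limit`
// entries per call, so one transaction's cost has a known upper bound.
fn clear_some(storage: &mut BTreeMap<u64, u64>, limit: usize) -> usize {
    let keys: Vec<u64> = storage.keys().take(limit).copied().collect();
    for k in &keys {
        storage.remove(k);
    }
    keys.len()
}

fn main() {
    let mut storage: BTreeMap<u64, u64> = (0..10).map(|i| (i, i)).collect();
    let removed = clear_some(&mut storage, 3);
    println!("removed {} entries, {} remain", removed, storage.len());
}
```

The bounded variant is easier to assign a fee weight to, because its worst-case cost no longer depends on unbounded chain state.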

cc: @keorn @kianenigma @mattrutherford

@soulofamachine soulofamachine added the U2-some_time_soon Issue is worth doing soon. label Apr 30, 2019
@kianenigma
Contributor

I would recommend starting with some benchmarking warm-up, instead of blindly analyzing read/write/loop counts in modules, to get a feeling for what is actually expensive to do. Rust's test library already has #[bench], but I couldn't find a good way to feed custom input to a function with it and observe the growth of its running time. Criterion, on the other hand, supports this and looks like a good crate to get some help from.
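
A minimal, dependency-free sketch of the measurement being described (Criterion does this far more rigorously, with warm-up and statistical analysis; the workload and input sizes here are made up):

```rust
use std::time::Instant;

// Hypothetical workload whose cost grows with the input size `n`.
fn naive_sum(n: u64) -> u64 {
    (0..n).sum()
}

fn main() {
    // Feed increasing inputs and observe how the running time grows --
    // the part that is awkward to do with plain #[bench].
    for n in [1_000u64, 10_000, 100_000] {
        let start = Instant::now();
        let result = naive_sum(n);
        let elapsed = start.elapsed();
        println!("n = {:>7}: result = {}, took {:?}", n, result, elapsed);
    }
}
```

Plotting elapsed time against `n` then gives a rough empirical complexity, which Criterion automates with confidence intervals and outlier detection.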

The above, combined with our ExternalitiesBuilder, which is commonly used to simulate a complete substrate runtime when testing individual srml modules, should be enough to add a set of benchmarks to each module.

That all being said, this is at the abstraction level of a runtime dispatch function. What we could do alternatively is start at a micro-benchmark level and first analyze the storage functions individually.
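
A sketch of what such a storage micro-benchmark could look like, using a BTreeMap as a stand-in backend (real Substrate storage goes through a trie-backed database, so absolute numbers here mean nothing; only the shape of the growth is illustrative):

```rust
use std::collections::BTreeMap;
use std::time::Instant;

// Count how many of the keys 0..n are present in the map; used here as
// a batch of reads to time against backends of different sizes.
fn count_hits(storage: &BTreeMap<Vec<u8>, Vec<u8>>, n: usize) -> usize {
    (0..n)
        .filter(|i| storage.get(&i.to_be_bytes().to_vec()).is_some())
        .count()
}

fn main() {
    for size in [1_000usize, 10_000, 100_000] {
        // Populate `size` entries, then time a fixed batch of reads, to
        // see how read cost varies with the state of the backend.
        let mut storage: BTreeMap<Vec<u8>, Vec<u8>> = BTreeMap::new();
        for i in 0..size {
            storage.insert(i.to_be_bytes().to_vec(), vec![0u8; 32]);
        }
        let start = Instant::now();
        let hits = count_hits(&storage, 1_000);
        println!(
            "{} entries: 1000 reads took {:?} ({} hits)",
            size,
            start.elapsed(),
            hits
        );
    }
}
```

Swapping the map for the actual storage layer (and for databases in different states) would turn this into the scenario-based comparison discussed below.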

@mattrutherford I heard that you have some experience doing something similar in the eth client. Any insight would be great, to hopefully kickstart this by the end of the week.

@mattrutherford
Contributor

Criterion looks like a good tool for some low-level benchmarks. The benchmarking work I am involved in with the eth client is focused at a higher level. We should consider that storage (I/O) cost may vary greatly depending on the state of the db, so I'd like to prepare several scenarios to test against. My preference would be to start at a high level, possibly with some system-level benchmarks, but I'd be happy to go with either of your suggestions.

@kianenigma
Contributor

@mattrutherford at any point if you open up a PR here or (more likely) in a separate repo, please link this issue to prevent it from becoming too stale.

@soulofamachine soulofamachine added this to the 2.0-kusama milestone Jun 3, 2019
@kianenigma kianenigma mentioned this issue Jul 21, 2019
6 tasks
@gnunicorn gnunicorn modified the milestones: 2.1, Polkadot Mar 4, 2020