Thanks for the fast-fix!
This should indeed remove the excessive over-estimation in its current state.
(This is almost right but not totally right, because the values we run the regression analysis on are not per-key benchmark results; they originate from benchmark results for all the keys and are then adjusted for a single key, see below for details.)
Yep, this is in here as follow-up #11637. Currently we don't have all the data, so it tries to extrapolate.
Taking just the maximum over all keys is still imprecise, because it only accounts for the pov_overhead added by a single key (okay, maybe by two: one for the base, another for the slope) and doesn't account for the overhead from all the other keys. That being said, the number of key reads most likely correlates with the complexity parameter(s), so to make a proper approximation we'd need to benchmark per key, which is currently not implemented. Is that right?
I don't quite get what you mean in the first part. We have to count the overhead of every key once. So if two keys are read, then we have to count the overhead twice, since in the worst case these keys don't share any common nodes in the proof. About the second part: yes. Currently we just measure the proof size of the whole thing (per component). It would be better to reset the proof recorder after every key access to track the size of each key. But in this case we would only tweak the result downward, hence why I did not prioritize it.
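To make the worst case concrete: if an extrinsic reads two keys whose proof paths share no trie nodes, both overheads are paid in full, so summing per-key overheads is the safe upper bound, while taking the maximum can under-count. A minimal sketch of the two accounting strategies (hypothetical types and names, not the actual benchmarking code):

```rust
/// Hypothetical per-key cost: the trie nodes on a key's path that must be
/// shipped in the proof when that key is read (its "overhead").
struct KeyCost {
    trie_overhead: u64,
}

/// Worst case: no key shares a trie node with any other, so every key's
/// overhead is counted once, i.e. the overheads are summed.
fn pov_overhead_sum(keys: &[KeyCost]) -> u64 {
    keys.iter().map(|k| k.trie_overhead).sum()
}

/// The alternative discussed here: take only the single largest overhead.
/// This avoids over-estimation from shared nodes but under-counts when
/// several non-overlapping keys are read.
fn pov_overhead_max(keys: &[KeyCost]) -> u64 {
    keys.iter().map(|k| k.trie_overhead).max().unwrap_or(0)
}
```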
That is not what's happening in the current implementation. We currently calculate the overhead for a key, add it to the proof_size of each benchmark result, and then approximate over those results, ending up with a base and slope for that single key. Then we repeat this for all the other keys. Therefore we end up with the maximum base among all keys, but it could be larger if it had counted the overhead for more than a single key. It hadn't, though; it counts the overhead of only a single key. Same for the slopes.
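In other words, each per-key regression sees the whole measured proof size plus only that one key's overhead, so the final maximum embeds the overhead of at most one key per component. A rough sketch of that flow (illustrative names only, not the real pallet-benchmarking code):

```rust
/// One benchmark run: the complexity-parameter value and the proof size
/// recorded for the whole extrinsic (all keys together).
struct Run {
    complexity: u64,
    total_proof_size: u64,
}

/// Least-squares fit of `proof_size ~ base + slope * complexity`.
/// (Degenerate inputs, e.g. a single complexity value, are not handled.)
fn fit(points: &[(f64, f64)]) -> (f64, f64) {
    let n = points.len() as f64;
    let sx: f64 = points.iter().map(|p| p.0).sum();
    let sy: f64 = points.iter().map(|p| p.1).sum();
    let sxx: f64 = points.iter().map(|p| p.0 * p.0).sum();
    let sxy: f64 = points.iter().map(|p| p.0 * p.1).sum();
    let slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    ((sy - slope * sx) / n, slope) // (base, slope)
}

/// For every key: add that key's overhead to each run, regress, then keep
/// the component-wise maximum. The result therefore contains the overhead
/// of at most one key, even if the extrinsic reads several keys.
fn pov_base_and_slope(runs: &[Run], per_key_overhead: &[u64]) -> (f64, f64) {
    per_key_overhead
        .iter()
        .map(|&overhead| {
            let pts: Vec<(f64, f64)> = runs
                .iter()
                .map(|r| (r.complexity as f64, (r.total_proof_size + overhead) as f64))
                .collect();
            fit(&pts)
        })
        .fold((0.0, 0.0), |acc, (b, s)| (acc.0.max(b), acc.1.max(s)))
}
```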
bot bench $ all
@ggwpez https://gitlab.parity.io/parity/mirrors/substrate/-/jobs/2652740 was started for your command.
@agryaznov the proof size comparison is here.
Looks okay as a temp hotfix until https://github.com/paritytech/substrate/issues/13808 is done.
* Align log
* Use max instead of sum
* Make comment ordering deterministic
* Dont add Pov overhead when all is ignored
* Update test pallet weights
* Re-run weights on bm2
* Fix test
* Actually use new weights (fucked up the merge for this file...)
* Update contract weights

Signed-off-by: Oliver Tale-Yazdi <[email protected]>
Closes #13765
TODOs:
* bm2
* right now)