Support for LEB128 compression of feature transformer parameters. #251
Merged
Conversation
linrock added a commit to linrock/Stockfish that referenced this pull request on Jun 22, 2023:
Created by retraining the sparsified master net (nn-cd2ff4716c34.nnue) on a 100% minified dataset including Leela transformers data from T80 may2023.

Weights permuted with the exact methods and code in: official-stockfish#4620
LEB128 compression done with the new serialize.py param in: official-stockfish/nnue-pytorch#251

Initially trained with max epoch 800. Around epoch 780, training was paused and max epoch raised to 960.

python3 easy_train.py \
  --experiment-name L1-1536-sparse-master-retrain \
  --training-dataset /data/leela96-dfrc99-v2-T60novdecT77decT78jantosepT79aprmayT80juntonovjan-v6dd-T80febtomay2023.min.binpack \
  --early-fen-skipping 27 \
  --start-from-engine-test-net True \
  --max_epoch 960 \
  --lr 4.375e-4 \
  --gamma 0.995 \
  --start-lambda 1.0 \
  --end-lambda 0.7 \
  --tui False \
  --seed $RANDOM \
  --gpus 0

For preparing the training dataset (interleaved size 328G):

python3 interleave_binpacks.py \
  leela96-filt-v2.min.binpack \
  dfrc99-16tb7p-eval-filt-v2.min.binpack \
  filt-v6-dd-min/test60-novdec2021-12tb7p-filter-v6-dd.min.binpack \
  filt-v6-dd-min/test77-dec2021-16tb7p-filter-v6-dd.min.binpack \
  filt-v6-dd-min/test78-jantomay2022-16tb7p-filter-v6-dd.min.binpack \
  filt-v6-dd-min/test78-juntosep2022-16tb7p-filter-v6-dd.min.binpack \
  filt-v6-dd-min/test79-apr2022-16tb7p-filter-v6-dd.min.binpack \
  filt-v6-dd-min/test79-may2022-16tb7p-filter-v6-dd.min.binpack \
  filt-v6-dd-min/test80-jun2022-16tb7p-filter-v6-dd.min.binpack \
  filt-v6-dd-min/test80-jul2022-16tb7p-filter-v6-dd.min.binpack \
  filt-v6-dd-min/test80-aug2022-16tb7p-filter-v6-dd.min.binpack \
  filt-v6-dd-min/test80-sep2022-16tb7p-filter-v6-dd.min.binpack \
  filt-v6-dd-min/test80-oct2022-16tb7p-filter-v6-dd.min.binpack \
  filt-v6-dd-min/test80-nov2022-16tb7p-filter-v6-dd.min.binpack \
  filt-v6-dd-min/test80-jan2023-16tb7p-filter-v6-dd.min.binpack \
  test80-2023/test80-feb2023-16tb7p-no-db.min.binpack \
  test80-2023/test80-mar2023-2tb7p-no-db.min.binpack \
  test80-2023/test80-apr2023-2tb7p-no-db.min.binpack \
  test80-2023/test80-may2023-2tb7p-no-db.min.binpack \
  /data/leela96-dfrc99-v2-T60novdecT77decT78jantosepT79aprmayT80juntonovjan-v6dd-T80febtomay2023.min.binpack

Minified binpacks and Leela T80 training data from 2023 available at:
https://robotmoon.com/nnue-training-data/

Local elo at 25k nodes per move:
nn-epoch879.nnue : 3.9 +/- 5.7

Passed STC: https://tests.stockfishchess.org/tests/view/64928c1bdc7002ce609c7690
LLR: 2.94 (-2.94,2.94) <0.00,2.00>
Total: 72000 W: 19242 L: 18889 D: 33869
Ptnml(0-2): 182, 7787, 19716, 8126, 189

Passed LTC: https://tests.stockfishchess.org/tests/view/64930a37dc7002ce609c82e3
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 54552 W: 14978 L: 14647 D: 24927
Ptnml(0-2): 23, 5123, 16650, 5460, 20

bench 2593605
vondele pushed a commit to vondele/Stockfish that referenced this pull request on Jun 22, 2023 (same commit message, closes official-stockfish#4635)
Joachim26 pushed a commit to Joachim26/StockfishNPS that referenced this pull request on Jun 22, 2023
rn5f107s2 pushed a commit to rn5f107s2/Stockfish that referenced this pull request on Jun 22, 2023
Zerbinati added a commit to Zerbinati/SugaR-XPrO that referenced this pull request on Jun 22, 2023
looks good to me, can this be merged?

yes, this is final
linrock added a commit to linrock/Stockfish that referenced this pull request on Jun 29, 2023:
Creating this net involved:
- a 5-step training process from scratch
- greedy permuting L1 weights with official-stockfish#4620
- leb128 compression with official-stockfish/nnue-pytorch#251
- greedy 2- and 3-cycle permuting with official-stockfish#4640

The 5 training steps were:

1. 400 epochs, lambda 1.0, lr 9.75e-4
   UHOx2-wIsRight-multinet-dfrc-n5000-largeGensfen-d9.binpack (178G)
     nodes5000pv2_UHO.binpack
     data_pv-2_diff-100_nodes-5000.binpack
     wrongIsRight_nodes5000pv2.binpack
     multinet_pv-2_diff-100_nodes-5000.binpack
     dfrc_n5000.binpack
     large_gensfen_multipvdiff_100_d9.binpack
   ep399 chosen as start model for step2

2. 800 epochs, end-lambda 0.75, skip 16
   LeelaFarseer-T78juntoaugT79marT80dec.binpack (141G)
     T60T70wIsRightFarseerT60T74T75T76.binpack
     test78-junjulaug2022-16tb7p.no-db.min.binpack
     test79-mar2022-16tb7p.no-db.min.binpack
     test80-dec2022-16tb7p.no-db.min.binpack
   ep559 chosen as start model for step3

3. 800 epochs, end-lambda 0.725, skip 20
   leela96-dfrc99-v2-T80dectofeb-sk20-mar-v6-T77decT78janfebT79apr.binpack (223G)
     leela96-filt-v2.min.binpack
     dfrc99-16tb7p-eval-filt-v2.min.binpack
     test80-dec2022-16tb7p-filter-v6-sk20.min-mar2023.binpack
     test80-jan2023-16tb7p-filter-v6-sk20.min-mar2023.binpack
     test80-feb2023-16tb7p-filter-v6-sk20.min-mar2023.binpack
     test80-mar2023-2tb7p-filter-v6.min.binpack
     test77-dec2021-16tb7p.no-db.min.binpack
     test78-janfeb2022-16tb7p.no-db.min.binpack
     test79-apr2022-16tb7p.no-db.min.binpack
   ep499 chosen as start model for step4

4. 800 epochs, end-lambda 0.7, skip 24
   0dd1cebea57 dataset official-stockfish#4606
   ep599 chosen as start model for step5

5. 800 epochs, end-lambda 0.7, skip 28
   same dataset as step4
   ep619 became nn-1b951f8b449d.nnue

For the final step5 training:

python3 easy_train.py \
  --experiment-name L1-2048-S5-sameData-sk28-S4-0dd1cebea57-shuffled-S3-leela96-dfrc99-v2-T80dectofeb-sk20-mar-v6-T77decT78janfebT79apr-sk20-S2-LeelaFarseerT78T79T80-ep399-S1-UHOx2-wIsRight-multinet-dfrc-n5000-largeGensfen-d9 \
  --training-dataset /data/leela96-dfrc99-T60novdec-v2-T80juntonovjanfebT79aprmayT78jantosepT77dec-v6dd-T80apr.binpack \
  --early-fen-skipping 28 \
  --nnue-pytorch-branch linrock/nnue-pytorch/misc-fixes-L1-2048 \
  --engine-test-branch linrock/Stockfish/L1-2048 \
  --start-from-engine-test-net False \
  --start-from-model /data/experiments/experiment_L1-2048-S4-0dd1cebea57-shuffled-S3-leela96-dfrc99-v2-T80dectofeb-sk20-mar-v6-T77decT78janfebT79apr-sk20-S2-LeelaFarseerT78T79T80-ep399-S1-UHOx2-wIsRight-multinet-dfrc-n5000-largeGensfen-d9/training/run_0/nn-epoch599.nnue \
  --max_epoch 800 \
  --lr 4.375e-4 \
  --gamma 0.995 \
  --start-lambda 1.0 \
  --end-lambda 0.7 \
  --tui False \
  --seed $RANDOM \
  --gpus 0

SF training data components for the step1 dataset:
https://drive.google.com/drive/folders/1yLCEmioC3Xx9KQr4T7uB6GnLm5icAYGU

Leela training data for steps 2-5 can be found at:
https://robotmoon.com/nnue-training-data/

Due to larger L1 size and slower inference, the speed penalty loses elo at STC. Measurements from 100 bench runs at depth 13 with x86-64-modern on Intel Core i5-1038NG7 2.00GHz:

sf_base = 1240730 +/- 3443 (95%)
sf_test = 1153341 +/- 2832 (95%)
diff    =  -87388 +/- 1616 (95%)
speedup = -7.04330% +/- 0.130% (95%)

Local elo at 25k nodes per move (vs. L1-1536 nn-fdc1d0fe6455.nnue):
nn-epoch619.nnue : 21.1 +/- 3.2

Failed STC: https://tests.stockfishchess.org/tests/view/6498ee93dc7002ce609cf979
LLR: -2.95 (-2.94,2.94) <0.00,2.00>
Total: 11680 W: 3058 L: 3299 D: 5323
Ptnml(0-2): 44, 1422, 3149, 1181, 44

LTC: https://tests.stockfishchess.org/tests/view/649b32f5dc7002ce609d20cf
Elo: 0.68 ± 1.5 (95%) LOS: 80.5%
Total: 40000 W: 10887 L: 10809 D: 18304
Ptnml(0-2): 36, 3938, 11958, 4048, 20
nElo: 1.50 ± 3.4 (95%) PairsRatio: 1.02

Passed VLTC 180+1.8: https://tests.stockfishchess.org/tests/view/64992b43dc7002ce609cfd20
LLR: 3.06 (-2.94,2.94) <0.00,2.00>
Total: 38086 W: 10612 L: 10338 D: 17136
Ptnml(0-2): 9, 3316, 12115, 3598, 5

Passed VLTC SMP 60+0.6 th 8: https://tests.stockfishchess.org/tests/view/649a21fedc7002ce609d0c7d
LLR: 2.95 (-2.94,2.94) <0.50,2.50>
Total: 38936 W: 11091 L: 10820 D: 17025
Ptnml(0-2): 1, 2948, 13305, 3207, 7

bench 2858591
vondele pushed a commit to vondele/Stockfish that referenced this pull request on Jul 1, 2023 (same commit message, closes official-stockfish#4646, Bench: 2505168)
linrock added a commit to linrock/Stockfish that referenced this pull request on Jul 6, 2023:
This was a later epoch from the same experiment that led to the previous master net. After training, it was prepared the same way:

1. greedy permuting L1 weights with official-stockfish#4620
2. leb128 compression with official-stockfish/nnue-pytorch#251
3. greedy 2- and 3-cycle permuting with official-stockfish#4640

Local elo at 25k nodes per move (vs. L1-1536 nn-fdc1d0fe6455.nnue):
nn-epoch739.nnue : 20.2 +/- 1.7

Passed STC: https://tests.stockfishchess.org/tests/view/64a050b33ee09aa549c4e4c8
LLR: 2.95 (-2.94,2.94) <0.00,2.00>
Total: 195552 W: 49977 L: 49430 D: 96145
Ptnml(0-2): 556, 22775, 50607, 23242, 596

Passed LTC: https://tests.stockfishchess.org/tests/view/64a127bd3ee09aa549c4f60c
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 235452 W: 60327 L: 59609 D: 115516
Ptnml(0-2): 119, 25173, 66426, 25887, 121

bench 2427629
vondele pushed a commit to vondele/Stockfish that referenced this pull request on Jul 6, 2023 (same commit message)
vondele pushed a commit to vondele/Stockfish that referenced this pull request on Jul 6, 2023 (same commit message, closes official-stockfish#4666)
Zerbinati added a commit to Zerbinati/SugaR-XPrO that referenced this pull request on Jul 9, 2023
Zerbinati added a commit to Zerbinati/SugaR-XPrO that referenced this pull request on Jul 9, 2023
This PR makes small alterations to the .nnue storage format and adds support for LEB128 compression of weights (currently only for the feature transformer). An additional --ft_compression CLI parameter may be used when serializing a network to .nnue. Note that it is also valid to serialize from .nnue to .nnue, which can be used to compress already existing networks; an example invocation is sketched after the next paragraph.

Every saved tensor may now have a header, currently present only when the tensor is compressed. The header specifies the compression method used and may contain other information useful for decoding. Since the header is optional in order to maintain backwards compatibility, it is marked by a relatively long magic string, so that the chance of a collision with an otherwise valid old network is minimal.
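A sketch of such an invocation, compressing an already-trained network (the flag value leb128 is an assumption here — check serialize.py's help for the values it actually accepts — and the output filename is a placeholder):

python3 serialize.py nn-cd2ff4716c34.nnue nn-compressed.nnue --ft_compression=leb128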
For LEB128 compression, the header consists of the string "COMPRESSED_LEB128" encoded in UTF-8, followed by a little-endian int32 equal to the number of bytes taken by the compressed tensor data. Stockfish can of course choose to support only one variant, since we have full control over how networks are made and we don't require support for networks other than the default one.
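As an illustration of this layout, here is a minimal pure-Python sketch — not the PR's actual serialize.py code, which uses numba-accelerated routines — that writes a tensor with the header described above and decodes its payload:

import struct

MAGIC = "COMPRESSED_LEB128".encode("utf-8")  # 17-byte magic marking a compressed tensor

def leb128_encode_signed(values):
    # Signed LEB128: 7 payload bits per byte; the high bit means "more bytes follow".
    out = bytearray()
    for v in values:
        while True:
            byte = v & 0x7F
            v >>= 7  # Python's arithmetic shift preserves the sign
            done = (v == 0 and not (byte & 0x40)) or (v == -1 and (byte & 0x40))
            out.append(byte if done else byte | 0x80)
            if done:
                break
    return bytes(out)

def leb128_decode_signed(data, count):
    values, pos = [], 0
    for _ in range(count):
        result = shift = 0
        while True:
            byte = data[pos]
            pos += 1
            result |= (byte & 0x7F) << shift
            shift += 7
            if not (byte & 0x80):
                if byte & 0x40:  # sign-extend from the last payload byte
                    result -= 1 << shift
                values.append(result)
                break
    return values

def write_compressed_tensor(f, values):
    payload = leb128_encode_signed(values)
    f.write(MAGIC)                            # magic string, UTF-8
    f.write(struct.pack("<i", len(payload)))  # little-endian int32: payload byte count
    f.write(payload)

The size win comes from quantized feature-transformer weights clustering around zero, so most values fit in a single LEB128 byte rather than the two bytes of a fixed-width int16.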
Sibling PR on the Stockfish side: official-stockfish/Stockfish#4617
Thanks to @MaximMolchanov for suggesting numba and providing fast LEB128 encode/decode functions.
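A hypothetical round trip through the sketch above (pure Python would be far too slow for a full feature transformer, which is why the numba versions matter):

import io

weights = [0, -1, 3, 127, -128, 905]  # toy stand-ins for int16 feature-transformer weights
buf = io.BytesIO()
write_compressed_tensor(buf, weights)

raw = buf.getvalue()
assert raw[:len(MAGIC)] == MAGIC
(nbytes,) = struct.unpack("<i", raw[len(MAGIC):len(MAGIC) + 4])
assert leb128_decode_signed(raw[len(MAGIC) + 4:len(MAGIC) + 4 + nbytes], len(weights)) == weights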