
Update smallnet to nn-baff1ede1f90.nnue with wider eval range #4919

Closed

Conversation

@linrock (Contributor) commented Dec 13, 2023

Created by training an L1-128 net from scratch with a wider range of evals in the training data and wld-fen-skipping disabled during training. The differences in this training data compared to the first dual NNUE PR are:

- removal of all positions with 3 pieces
- when piece count >= 16, keep positions with simple eval above 750
- when piece count < 16, remove positions with simple eval above 3000

The asymmetric data filtering was meant to flatten the training data piece count distribution, which was previously heavily skewed towards positions with low piece counts.
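
A minimal sketch of the three rules above, assuming "simple eval above N" means abs(simple_eval) > N (the `keep_position` helper is hypothetical; the authoritative filter is the transform.cpp linked below):

```python
# Sketch of the v4 filtering rules described above. Assumes "simple eval
# above N" means abs(simple_eval) > N; the real filter is the linked
# transform.cpp, not this helper.
def keep_position(piece_count: int, simple_eval: int) -> bool:
    if piece_count == 3:              # drop all 3-piece positions
        return False
    if piece_count >= 16:             # keep only high-material-imbalance data
        return abs(simple_eval) > 750
    return abs(simple_eval) <= 3000   # drop extreme evals at low piece counts
```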

Additionally, the simple eval range where the smallnet is used was widened to cover more positions previously evaluated by the big net and simple eval.
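
For context, dual NNUE dispatches between the two nets on simple eval, roughly as below (a schematic only; the real logic is C++ inside Stockfish, and the threshold shown is a placeholder, not this PR's bounds):

```python
SMALLNET_THRESHOLD = 1000  # placeholder value, not this PR's actual bounds

def evaluate(pos, simple_eval, smallnet, bignet) -> int:
    # Positions with a large material imbalance go to the cheap L1-128
    # smallnet; the rest go to the big net. Widening the simple eval range
    # means more positions take the smallnet path.
    if abs(simple_eval(pos)) > SMALLNET_THRESHOLD:
        return smallnet(pos)
    return bignet(pos)
```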

```yaml
experiment-name: 128--S1-hse-S7-v4-S3-v1-no-wld-skip

training-dataset:
  - /data/hse/S3/leela96-filt-v2.min.high-simple-eval-1k.binpack
  - /data/hse/S3/dfrc99-16tb7p-eval-filt-v2.min.high-simple-eval-1k.binpack
  - /data/hse/S3/test80-apr2022-16tb7p.min.high-simple-eval-1k.binpack

  - /data/hse/S7/test60-2020-2tb7p.v6-3072.high-simple-eval-v4.binpack
  - /data/hse/S7/test60-novdec2021-12tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack

  - /data/hse/S7/test77-nov2021-2tb7p.v6-3072.min.high-simple-eval-v4.binpack
  - /data/hse/S7/test77-dec2021-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
  - /data/hse/S7/test77-jan2022-2tb7p.high-simple-eval-v4.binpack

  - /data/hse/S7/test78-jantomay2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
  - /data/hse/S7/test78-juntosep2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack

  - /data/hse/S7/test79-apr2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
  - /data/hse/S7/test79-may2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack

  - /data/hse/S7/test80-may2022-16tb7p.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-jun2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-jul2022-16tb7p.v6-dd.min.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-aug2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-sep2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-oct2022-16tb7p.v6-dd.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-nov2022-16tb7p-v6-dd.min.high-simple-eval-v4.binpack

  - /data/hse/S7/test80-jan2023-3of3-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-feb2023-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-mar2023-2tb7p.v6-sk16.min.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-apr2023-2tb7p-filter-v6-sk16.min.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-may2023-2tb7p.v6.min.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-jun2023-2tb7p.v6-3072.min.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-jul2023-2tb7p.v6-3072.min.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-aug2023-2tb7p.v6.min.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-sep2023-2tb7p.high-simple-eval-v4.binpack
  - /data/hse/S7/test80-oct2023-2tb7p.high-simple-eval-v4.binpack

wld-fen-skipping: False
start-from-engine-test-net: False

nnue-pytorch-branch: linrock/nnue-pytorch/L1-128
engine-test-branch: linrock/Stockfish/L1-128-nolazy
engine-base-branch: linrock/Stockfish/L1-128

num-epochs: 500
start-lambda: 1.0
end-lambda: 1.0
```

Experiment yaml configs converted to easy_train.sh commands with:
https://github.com/linrock/nnue-tools/blob/4339954/yaml_easy_train.py

Binpacks interleaved at training time with:
official-stockfish/nnue-pytorch#259

FT weights permuted with 10k positions from fishpack32.binpack with:
official-stockfish/nnue-pytorch#254
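
Permuting the feature transformer (FT) outputs is function-preserving as long as the next layer's input columns are reordered the same way; the permutation is apparently chosen using the 10k positions mentioned above. A generic numpy sketch of applying a permutation (not of the selection objective in nnue-pytorch#254):

```python
import numpy as np

def permute_ft(ft_weight, ft_bias, l1_weight, perm):
    # ft_weight: (num_features, L1), ft_bias: (L1,), l1_weight: (out, L1).
    # Reordering FT output columns and the next layer's input columns with
    # the same permutation leaves the network's evaluation unchanged.
    return ft_weight[:, perm], ft_bias[perm], l1_weight[:, perm]

# Any permutation is output-preserving, e.g.:
# perm = np.random.permutation(ft_weight.shape[1])
```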

Data filtered for high simple eval positions (v4) with:
https://github.com/linrock/Stockfish/blob/b9c8440/src/tools/transform.cpp#L640-L675

Training data can be found at:
https://robotmoon.com/nnue-training-data/

Local elo at 25k nodes per move of
L1-128 smallnet (nnue-only eval) vs. L1-128 trained on standard S1 data:
nn-epoch319.nnue : -241.7 +/- 3.2

Passed STC vs. 36db936:
https://tests.stockfishchess.org/tests/view/6576b3484d789acf40aabbfe
LLR: 2.94 (-2.94,2.94) <0.00,2.00>
Total: 21920 W: 5680 L: 5381 D: 10859
Ptnml(0-2): 82, 2488, 5520, 2789, 81

Passed LTC vs. DualNNUE #4915:
https://tests.stockfishchess.org/tests/view/65775c034d789acf40aac7e3
LLR: 2.95 (-2.94,2.94) <0.50,2.50>
Total: 147606 W: 36619 L: 36063 D: 74924
Ptnml(0-2): 98, 16591, 39891, 17103, 120
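
For scale, a plain logistic Elo estimate can be recovered from the LTC totals above (fishtest's own reported estimates additionally account for the pentanomial pair statistics):

```python
import math

W, L, D = 36619, 36063, 74924           # LTC totals from above
score = (W + 0.5 * D) / (W + L + D)     # ~0.5019 points per game
elo = -400 * math.log10(1 / score - 1)  # ~ +1.3 Elo
print(f"{elo:+.2f}")
```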

Bench: 1438336

@linrock (Contributor, Author) commented Dec 13, 2023

This is meant to be a placeholder draft PR for now while the other PR is undergoing changes. However, I can't seem to change this into a draft PR.

@Disservin marked this pull request as draft on December 13, 2023 18:18
Disservin pushed a commit that referenced this pull request Dec 30, 2023
Created by retraining the master big net `nn-0000000000a0.nnue` on the same
dataset with the ranger21 optimizer and more WDL skipping at training time.

More WDL skipping is meant to increase lambda accuracy and train on fewer
misevaluated positions whose scores are unlikely to correlate with game
outcomes. Inspired by:
- repeated reports in Discord #events-discuss about SF misplaying due to wrong
  endgame evals, possibly due to Leela's endgame weaknesses reflected in the
  training data
- an attempt to reduce the skewed dataset piece count distribution, where there
  are many more positions with fewer than 16 pieces, since the target piece
  count distribution in the trainer is symmetric around 16
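
The general shape of outcome-based skipping (a schematic sketch only; the exact rule and constants in the linrock/nnue-pytorch r21-more-wdl-skip branch are not reproduced here):

```python
import math, random

def skip_position(score_cp: float, outcome: float, scale: float = 400.0) -> bool:
    # Convert the search score to an expected game result in [0, 1] with a
    # logistic model, then skip with probability proportional to how much it
    # disagrees with the actual outcome (0, 0.5, or 1). Both the scale and
    # the skip curve are placeholders, not the branch's actual values.
    expected = 1.0 / (1.0 + math.exp(-score_cp / scale))
    return random.random() < abs(expected - outcome)
```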

The faster convergence seen with ranger21 is meant to:
- prune experiment ideas more quickly, since fewer epochs are needed to reach elo maxima
- explore potentially faster trainings by shortening each run

```yaml
experiment-name: 2560-S7-Re-514G-ranger21-more-wdl-skip
training-dataset: /data/S6-514G.binpack
early-fen-skipping: 28

start-from-engine-test-net: True
nnue-pytorch-branch: linrock/nnue-pytorch/r21-more-wdl-skip

num-epochs: 1200
lr: 4.375e-4
gamma: 0.995
start-lambda: 1.0
end-lambda: 0.7
```
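
Assuming the trainer's usual per-epoch step decay, the lr/gamma pair above implies the learning rate decays exponentially over the 1200 epochs, down to roughly 0.2% of its starting value:

```python
# Per-epoch exponential decay assumed from the lr/gamma pair above.
lr, gamma = 4.375e-4, 0.995
for epoch in (0, 300, 600, 1200):
    print(epoch, lr * gamma ** epoch)  # ~4.4e-4, ~9.7e-5, ~2.2e-5, ~1.1e-6
```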

Experiment yaml configs converted to easy_train.sh commands with:
https://github.com/linrock/nnue-tools/blob/4339954/yaml_easy_train.py

Implementations are based on Sopel's NNUE training & experimentation log:
https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY
- Experiment 336 - ranger21 https://github.com/Sopel97/nnue-pytorch/tree/experiment_336
- Experiment 351 - more WDL skipping

The version of the ranger21 optimizer used is:
https://github.com/lessw2020/Ranger21/blob/b507df6/ranger21/ranger21.py
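
Construction of that optimizer, for illustration only; everything beyond lr and num-epochs is an assumption rather than a value from this run, and Ranger21 builds its warmup/warmdown schedule from the epoch and batch counts passed at construction:

```python
import torch
from ranger21 import Ranger21  # assuming the pinned package exposes the class

model = torch.nn.Linear(2560, 1)   # stand-in for the L1-2560 big net
optimizer = Ranger21(
    model.parameters(),
    lr=4.375e-4,                   # from the yaml config above
    num_epochs=1200,               # from the yaml config above
    num_batches_per_epoch=6104,    # placeholder; depends on dataset/batch size
)
```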

The dataset is the exact same as in:
#4782

Local elo at 25k nodes per move:
nn-epoch619.nnue : 6.2 +/- 4.2

Passed STC:
https://tests.stockfishchess.org/tests/view/658a029779aa8af82b94fbe6
LLR: 2.93 (-2.94,2.94) <0.00,2.00>
Total: 46528 W: 11985 L: 11650 D: 22893
Ptnml(0-2): 154, 5489, 11688, 5734, 199

Passed LTC:
https://tests.stockfishchess.org/tests/view/658a448979aa8af82b95010f
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 265326 W: 66378 L: 65574 D: 133374
Ptnml(0-2): 153, 30175, 71254, 30877, 204

This was additionally tested with the latest DualNNUE and passed SPRTs:

Passed STC vs. #4919
https://tests.stockfishchess.org/tests/view/658bcd5c79aa8af82b951846
LLR: 2.93 (-2.94,2.94) <0.00,2.00>
Total: 296128 W: 76273 L: 75554 D: 144301
Ptnml(0-2): 1223, 35768, 73617, 35979, 1477

Passed LTC vs. #4919
https://tests.stockfishchess.org/tests/view/658c988d79aa8af82b95240f
LLR: 2.95 (-2.94,2.94) <0.50,2.50>
Total: 75618 W: 19085 L: 18680 D: 37853
Ptnml(0-2): 45, 8420, 20497, 8779, 68

closes #4942

Bench: 1304666
linrock and others added 2 commits January 7, 2024 13:57
Credit goes to @mstembera for:
- writing the code enabling dual NNUE: official-stockfish#4898
- the idea of trying L1-128 trained exclusively on high simple eval positions

The L1-128 smallnet is:
- epoch 399 of a single-stage training from scratch
- trained only on positions from filtered data with high material difference
  - defined by abs(simple_eval) > 1000 (see the sketch below)
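
For reference, simple_eval is essentially a material-difference count from the side to move's perspective; a rough Python rendering with piece values that only approximate the engine's (the authoritative definition is in Stockfish's evaluate.cpp):

```python
# Approximate rendering of Stockfish's simple_eval: a pure material
# difference from the side to move. Values approximate the engine's.
VALUES = {"pawn": 208, "knight": 781, "bishop": 825, "rook": 1276, "queen": 2538}

def simple_eval(own: dict, opp: dict) -> int:
    return sum(v * (own.get(p, 0) - opp.get(p, 0)) for p, v in VALUES.items())
```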

```yaml
experiment-name: 128--S1-only-hse-v2

training-dataset:
  - /data/hse/S3/dfrc99-16tb7p-eval-filt-v2.min.high-simple-eval-1k.binpack
  - /data/hse/S3/leela96-filt-v2.min.high-simple-eval-1k.binpack
  - /data/hse/S3/test80-apr2022-16tb7p.min.high-simple-eval-1k.binpack

  - /data/hse/S7/test60-2020-2tb7p.v6-3072.high-simple-eval-1k.binpack
  - /data/hse/S7/test60-novdec2021-12tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack

  - /data/hse/S7/test77-nov2021-2tb7p.v6-3072.min.high-simple-eval-1k.binpack
  - /data/hse/S7/test77-dec2021-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
  - /data/hse/S7/test77-jan2022-2tb7p.high-simple-eval-1k.binpack

  - /data/hse/S7/test78-jantomay2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
  - /data/hse/S7/test78-juntosep2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack

  - /data/hse/S7/test79-apr2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
  - /data/hse/S7/test79-may2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack

  # T80 2022
  - /data/hse/S7/test80-may2022-16tb7p.high-simple-eval-1k.binpack
  - /data/hse/S7/test80-jun2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
  - /data/hse/S7/test80-jul2022-16tb7p.v6-dd.min.high-simple-eval-1k.binpack
  - /data/hse/S7/test80-aug2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
  - /data/hse/S7/test80-sep2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
  - /data/hse/S7/test80-oct2022-16tb7p.v6-dd.high-simple-eval-1k.binpack
  - /data/hse/S7/test80-nov2022-16tb7p-v6-dd.min.high-simple-eval-1k.binpack

  # T80 2023
  - /data/hse/S7/test80-jan2023-3of3-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
  - /data/hse/S7/test80-feb2023-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
  - /data/hse/S7/test80-mar2023-2tb7p.v6-sk16.min.high-simple-eval-1k.binpack
  - /data/hse/S7/test80-apr2023-2tb7p-filter-v6-sk16.min.high-simple-eval-1k.binpack
  - /data/hse/S7/test80-may2023-2tb7p.v6.min.high-simple-eval-1k.binpack
  - /data/hse/S7/test80-jun2023-2tb7p.v6-3072.min.high-simple-eval-1k.binpack
  - /data/hse/S7/test80-jul2023-2tb7p.v6-3072.min.high-simple-eval-1k.binpack
  - /data/hse/S7/test80-aug2023-2tb7p.v6.min.high-simple-eval-1k.binpack
  - /data/hse/S7/test80-sep2023-2tb7p.high-simple-eval-1k.binpack
  - /data/hse/S7/test80-oct2023-2tb7p.high-simple-eval-1k.binpack

start-from-engine-test-net: False

nnue-pytorch-branch: linrock/nnue-pytorch/L1-128
engine-test-branch: linrock/Stockfish/L1-128-nolazy
engine-base-branch: linrock/Stockfish/L1-128

num-epochs: 500
lambda: 1.0
```

Experiment yaml configs converted to easy_train.sh commands with:
https://github.com/linrock/nnue-tools/blob/4339954/yaml_easy_train.py

Binpacks interleaved at training time with:
official-stockfish/nnue-pytorch#259

Data filtered for high simple eval positions with:
https://github.com/linrock/nnue-data/blob/32d6a68/filter_high_simple_eval_plain.py
https://github.com/linrock/Stockfish/blob/61dbfe/src/tools/transform.cpp#L626-L655

Training data can be found at:
https://robotmoon.com/nnue-training-data/

Local elo at 25k nodes per move of
L1-128 smallnet (nnue-only eval) vs. L1-128 trained on standard S1 data:
nn-epoch399.nnue : -318.1 +/- 2.1

Passed STC:
https://tests.stockfishchess.org/tests/view/6574cb9d95ea6ba1fcd49e3b
LLR: 2.93 (-2.94,2.94) <0.00,2.00>
Total: 62432 W: 15875 L: 15521 D: 31036
Ptnml(0-2): 177, 7331, 15872, 7633, 203

Passed LTC:
https://tests.stockfishchess.org/tests/view/6575da2d4d789acf40aaac6e
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 64830 W: 16118 L: 15738 D: 32974
Ptnml(0-2): 43, 7129, 17697, 7497, 49

Bench: 1330050
Co-Authored-By: mstembera <[email protected]>
@linrock force-pushed the n2-lazy-1050-2550-pr branch from 595e24e to 14f4d95 on January 7, 2024 20:11
@linrock changed the title from "Update smallnet to nn-baff1ede1f90.nnue with simple eval lazy threshold" to "Update smallnet to nn-baff1ede1f90.nnue with wider eval range" on Jan 7, 2024
@Disservin added the "bench-change" and "🚀 gainer" labels on Jan 7, 2024
@linrock marked this pull request as ready for review on January 7, 2024 20:21
@Disservin added the "to be merged" label on Jan 7, 2024
@Disservin closed this in f09adaa on Jan 7, 2024