
Final June 2020 Neural Nets, Minor Bugfixes

@lightvector released this 21 Jun 17:08

If you're a new user, don't forget to check out this section for getting started and basic usage!

KataGo's third major run is complete! We could almost certainly keep going and keep improving with no end in sight, but due to the cost of continuing the run, this seems like a good point to stop for now. In the future, there is a chance that KataGo will launch a crowdsourced, community-distributed run to continue with further research and improvement. But regardless, I hope you enjoy the run so far and these final networks.

This is both a release of the final networks, as well as an update to KataGo's code for a few minor bugfixes.

New/Final Neural Networks (for now!)

These are the final neural networks for the June 2020 ("g170") run, obtained after training for close to two weeks at reduced learning rates. This resulted in a huge strength boost: somewhere from 200 to 250 Elo for both the 30-block and 40-block networks, and around 100 Elo for the 20-block network. (For reference, a 200-250 Elo edge corresponds to roughly a 76-81% expected winrate in head-to-head games.)

These gains were measured by play within a pool of older KataGo networks, so it's unknown what proportion of them transfers to opponents other than KataGo itself - gains from learning rate drops (which presumably mostly reduce noise and improve overall accuracy) might be qualitatively different from the gains accumulated over time by learning new shapes and moves. But hopefully much of the improvement does carry over.

  • g170-b30c320x2-s4824661760-d1229536699 ("g170 30 block d1229M") - Final 30 block network!
  • g170-b40c256x2-s5095420928-d1229425124 ("g170 40 block d1229M") - Final 40 block network!
  • g170e-b20c256x2-s5303129600-d1228401921 ("g170e 20 block d1228M") - Final 20 block network!

Additionally, posted here is an extremely fat and heavy neural net: 40 blocks with 384 channels instead of 256, which has never been tested (scroll to the bottom of the release assets, then download and unzip it to find the .bin.gz file).

It is probably quite slow to run and likely weaker given equal compute time, but it would be very interesting to see how its per-playout strength compares, as well as its one-playout strength (pure raw policy), if anyone wants to test it out!
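If you do want to try it, here is a minimal sketch of one way to probe its one-playout (raw-policy) play, assuming you've unzipped the download and are using the default_gtp.cfg shipped with the release. The model filename below is a placeholder for whatever the unzipped .bin.gz is actually named, and the -override-config flag should accept comma-separated key=value pairs in recent versions - if not, just set maxVisits = 1 directly in your GTP config:

```
# Run GTP with the fat 40x384 net, capped at a single visit per move,
# which should roughly amount to playing the raw policy.
# The model filename is a placeholder - substitute the actual unzipped .bin.gz.
./katago gtp \
  -model g170-b40c384x2-<...>.bin.gz \
  -config default_gtp.cfg \
  -override-config "maxVisits=1,maxPlayouts=1"
```

For per-playout comparisons, you would instead leave the config alone and simply cap this net and a reference net to the same fixed number of visits.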

Which Network Should I Use?

  • For weaker or mid-range GPUs, try the final 20-block network.
  • For top-tier GPUs, and/or for the highest-quality analysis when you're going to use many thousands of playouts and long thinking times, try the final 40-block network, which is more costly to run but should be the strongest.
  • If you care a lot about theoretical purity - no outside data, bot learns strictly on its own - use the 20 or 40 block nets from this release, which are pure in this way and still much stronger than Leela Zero, but also not quite as strong as these final nets here.
  • If you want some nets that are much faster to run, each with its own interesting style of play due to its unique stage of learning, try any of the "b10c128" or "b15c192" Extended Training Nets here - 10-block and 15-block networks from earlier in the run that are much weaker, but still pro-level and beyond.
  • And if you want to see how a super ultra large/slow network performs that nobody has tested until now, try the fat 40-block 384 channel network mentioned a little up above.
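If you're not sure which of these your hardware can handle, KataGo's benchmark command is a quick way to find out. A sketch below, assuming the release's default_gtp.cfg and that the downloaded model file keeps the name listed above with a .bin.gz extension:

```
# Benchmark a net on your GPU: reports visits/second at several thread counts
# and suggests a numSearchThreads value for your config.
# Swap in the 20- or 30-block file to compare them on the same hardware.
./katago benchmark \
  -model g170-b40c256x2-s5095420928-d1229425124.bin.gz \
  -config default_gtp.cfg
```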

Bugfixes this Release

  • Fixed a bug in analysis_example.cfg where nnMaxBatchSize was duplicated, and added a safeguard in KataGo to fail if fed any config with duplicate parameters in the future, instead of silently using one of them and ignoring the other.
    • If you have a config with such a buggy duplicated parameter, you may find KataGo failing to start when you switch to this release - just delete the duplicate line, and if the two values were inconsistent or conflicting, set the one you keep to the value you actually intend (see the example snippet after this list).
  • Split one of the OpenCL kernels into a few pieces so that it compiles faster, and made another minor tweak, so on most systems the OpenCL tuner should take somewhat less time.
  • katago match will now size the neural net according to the largest board size involved in the match by default, instead of always 19. This should make it faster to run test games on small boards.
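For example, the duplicate-parameter situation from the first bugfix looked roughly like this (the values here are illustrative, not the actual ones from analysis_example.cfg):

```
# Before: the same key appears twice - older versions silently kept one value
# and ignored the other; this release now refuses to load the config.
nnMaxBatchSize = 16
nnMaxBatchSize = 64

# After: keep a single line with the value you actually intend.
nnMaxBatchSize = 64
```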