
New Neural Nets, Optional Wider Analysis, Bugfixes

@lightvector released this 09 May 2020, 21:53

This release is outdated; see the releases page for more recent versions with important bugfixes. If you're upgrading from a version before v1.4.0, see below for a variety of notes about the changes since 1.3.x! Also, as of early May 2020, the latest and strongest neural nets are still the ones here.

If you're a new user, don't forget to check out this section for getting started and basic usage!

This time, we have both new nets and new code!

New Neural Nets!

These are the new strongest neural nets of each size so far. Interestingly, the 40 block net pulled well ahead in how much it improved this time (about 70 Elo), while the 30 block net did not make nearly the same improvement (about 25 Elo). Perhaps the 40 block net got lucky in its gradient descent and stumbled into learning something useful that the other nets didn't. The 20 block net gained maybe around 15 Elo. All of these differences have an uncertainty window of +/- 15 Elo or so (95% confidence), and of course they are measured against a large, varied pool of internal KataGo nets, so they might vary a bit against very different opponents.

The strongest net to use on weak to midrange hardware is likely the 20 block net. Given the 40 block net's large gain this time, though, it might be the stronger choice on strong hardware and/or at long time controls.

KataGo's main run is close to wrapping up, so these will likely be the last "semi-zero" neural nets released, that is, nets trained purely with no outside data. A few more nets will be released after these as KataGo finishes this run with some experimentation with ways of using outside data.

  • g170-b30c320x2-s3530176512-d968463914 - The latest and final semi-zero 30-block net.
  • g170-b40c256x2-s3708042240-d967973220 - The latest and final semi-zero 40-block net.
  • g170e-b20c256x2-s4384473088-d968438914 - The latest and final semi-zero 20-block net (continuing extended training on games from the bigger nets).

New Features and Changes in this Release:

  • New experimental config option to help analysis: analysisWideRootNoise

    • Set to a small value like 0.04 to make KataGo broaden its search at the root during analysis (such as in Sabaki or Lizzie) and evaluate more moves, making it easier to see KataGo's initial impressions of them, at the cost of needing somewhat more time before the top moves get searched as deeply. (See the example just after this list.)
    • Or set to a large value like 1 to make KataGo search and evaluate almost every move on the board a bunch.
    • You can also change this value at runtime in the GTP console via kata-set-param analysisWideRootNoise VALUE.
    • Only affects analysis, does NOT affect play (e.g. genmove).
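
    For example, in your GTP config, something like the following should work; the values are just the illustrations from above, and the comments are mine:

    ```
    # Broaden KataGo's root search slightly during analysis commands:
    analysisWideRootNoise = 0.04

    # Or use a large value, to evaluate almost every legal move at least a little:
    # analysisWideRootNoise = 1.0
    ```

    The same parameter can also be changed on the fly from the GTP console, e.g. kata-set-param analysisWideRootNoise 0.04.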
  • KataGo will now tolerate model files that have been renamed to just ".gz" rather than one of ".bin.gz" or ".txt.gz".

  • Implemented the cputime and gomill-cpu_time GTP commands, documented here. These should let automated match/tournament scripts compare and report the time taken by the bot when you run KataGo in tests against other bots or other versions of itself. (A rough sketch of the exchange is shown below.)
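
    For illustration, a GTP session might look roughly like the following; the numbers are made up, and my understanding is that the response is the engine's accumulated CPU/thread time in seconds, so check the linked docs for the exact semantics:

    ```
    genmove b
    = Q16

    cputime
    = 4.523

    gomill-cpu_time
    = 4.523
    ```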

  • EDIT (2020-05-12) (accidentally omitted in the initial release notes): Reworked the way KataGo configures playoutDoublingAdvantage and dynamicPlayoutDoublingAdvantage, and slightly improved how it computes the initial lead in handicap games.

    • You can now simply comment out all playoutDoublingAdvantage-related values and KataGo will choose a sensible default, which is to play evenly in even games, and to play aggressively when giving handicap stones, and safely when receiving handicap stones.
    • The default config has been updated accordingly, and you can also read the new config to see how to configure these values going forward if you prefer non-default behavior. (A minimal sketch follows below.)
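
    As a minimal sketch, assuming you keep the parameter names mentioned above (the comments in the updated default config are the authoritative explanation):

    ```
    # Leave these commented out to get the new default behavior: play evenly in
    # even games, aggressively when giving handicap, safely when receiving it.
    # playoutDoublingAdvantage = 0.0
    # dynamicPlayoutDoublingAdvantage = ...

    # Uncomment and set a value only if you want to force a specific behavior.
    ```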

Bugfixes

  • Added workaround logic to correctly handle rules-based score adjustment in handicap games (e.g. +1 point per handicap stone in Chinese rules) when handicap is placed in a non-GTP-compliant way, via consecutive black moves and white passes (see the sketch below). This behavior can still be disabled via assumeMultipleStartingBlackMovesAreHandicap = false.
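
    For reference, the "non-GTP-compliant" placement means a controller that sets up handicap stones as ordinary moves rather than via the standard fixed_handicap / place_free_handicap / set_free_handicap commands, roughly like this (coordinates are just an example):

    ```
    play B D4
    =
    play W pass
    =
    play B Q16
    =
    play W pass
    =
    play B D16
    =
    ```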

  • Fixed bug where adjusting the system clock or time zone might interfere with the amount of time KataGo searches for, on some systems.

For Devs

  • Added a script to better support synchronous training, documented here.

  • Added various new options and flags for the JSON analysis engine, including root info, raw policy, and the ability to override almost any search-related config parameter at runtime. The analysis engine now also defaults to finishing all pending tasks before quitting when stdin is closed, instead of dropping them, although a command line flag can override this. (A sample query is sketched below.)
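
    For illustration, a single query on one line of stdin might look roughly like this; the exact field names (includePolicy, overrideSettings, etc.) are my reading of the analysis engine documentation, so double-check there:

    ```
    {"id":"ex1","moves":[["B","Q16"],["W","D4"]],"rules":"tromp-taylor","komi":7.5,"boardXSize":19,"boardYSize":19,"analyzeTurns":[2],"includePolicy":true,"overrideSettings":{"maxVisits":200}}
    ```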

  • Reorganized the selfplay-related configs into a subdirectory within cpp/configs, along with some internal changes and cleanups to selfplay config parameters and logic. The example configs have been updated; you can diff them to see the relevant changes.

  • num_games_total, which used to behave buggily and unreliably, has been removed entirely from the selfplay config file and replaced by a command line argument, -max-games-total, so that it is much more easily changeable by a script (see the sketch below).
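
    As a sketch only, the game cap now goes on the command line rather than in the config; companion flags and paths below are placeholders, so see the selfplay docs for the actual invocation:

    ```
    ./katago selfplay -config your_selfplay.cfg <other flags> -max-games-total 25000
    ```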

Enjoy!