Stefan Pohl Computer Chess

private website for chess engine tests


Latest website news (2017/06/25): Stockfish test runs: all new! A new notebook with an i7-6700HQ Skylake (2.6 GHz) CPU (faster, about +30% more nodes/s, and more memory), a new opening set (my new 2017 SALC 500 set with shorter lines, only 7.5 moves / 15 plies deep), only 5 opponents with 1000 games each (instead of 7), but a much longer thinking time: 180''+1'' instead of 70''+700ms - the average thinking time displayed by the LittleBlitzerGUI rises from 1.7'' to 3.8'' per move. And a bigger hash: 512 MB instead of 128 MB. I am doing this because I received a lot of requests for testing with a longer thinking time, since 70''+700ms is very close to the Stockfish Framework LTC (60''+600ms).
Before the testing of new Stockfish, asmFish and Brainfish versions can start, the 5 opponents (Komodo 11.01, Houdini 5, Shredder 13, Fizbo 1.9, Andscacs 0.91) and Stockfish have to play against each other in order to build a game base for the ORDO calculation. That will take some time (around one month, I think). So please have a little patience (and enjoy the summer...) Thanx.
The long thinking-time tournament is of course still running, and I will update this section of my website from time to time.

 

 

The completely new version of the SALC (2017) opening books/sets (white and black castling to opposite sides) for more spectacular computer chess with fewer draws and more king attacks is now online. Download the complete SALC package in the Downloads & Links section, or just right here.

 

Stay tuned.


Stockfish testing

 

Playing conditions:

 

Hardware: i7-2630QM 2.0GHz Notebook, Windows 10 64bit, 4GB RAM

Fritzmark: single core: 3.97 / 1905 (all engines run on one core only); average meganodes/s displayed by LittleBlitzerGUI: Houdini: 2.0 mn/s, Stockfish: 1.7 mn/s

Hash: 128MB per engine

GUI: LittleBlitzerGUI (draw at 120 moves, resign at 450cp (for 4 moves))

Tablebases: None

Openings: 10moves_SALC_500.epd (download the file in the Downloads & Links section)

Ponder, Large Memory Pages & learning: Off

Thinking time: 70''+700ms per engine per game (average game duration: 3.5 minutes), standardized to the hardware speed and the thinking time of the excellent FGRL Bullet rating list. One 7000-game test run takes about 6 days (running on only 3 of 4 cores). The version numbers of the Stockfish development engines are the release date written backwards (year, month, day); example: 141028 = October 28, 2014. They are downloaded at chess.ultimaiq.net. If more than one version is released on a day, I always use the latest version of that day, and I use the version "for modern computers". (At the moment, the compiles for modern Windows machines on abrok.eu are around 8% slower, so I don't use them anymore.)
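
For illustration, a minimal Python sketch of that version-number scheme (the helper name is my own and not part of any engine or GUI):

from datetime import date

def stockfish_version_to_date(version):
    # the version number is the release date written backwards: YYMMDD
    # (years 2000 and later assumed)
    yy, mm, dd = int(version[0:2]), int(version[2:4]), int(version[4:6])
    return date(2000 + yy, mm, dd)

print(stockfish_version_to_date("141028"))  # 2014-10-28, the example above
print(stockfish_version_to_date("170522"))  # 2017-05-22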

 

Each Stockfish version plays 1000 games against each of Komodo 11.01, Houdini 5, Shredder 13, Fizbo 1.9, Gull 3, Fire 4 and Critter 1.6a (7000 games in total).

 

Latest update: 2017/06/08: asmFish 170522

 

Download the individual statistics here

 

     Program                   Elo    +    -   Games   Score   Av.Op.  Draws

   1 BrainFish 170410 x64    : 3461    7    7  7000    80.0 %   3202   31.6 %
   2 asmFish 170522 x64      : 3448    7    7  7000    78.6 %   3203   31.9 % (new)
   3 asmFish 170502 x64      : 3440    7    7  7000    78.2 %   3202   33.8 %
   4 Stockfish 170526 x64    : 3420    7    7  7000    76.0 %   3203   35.1 %
   5 Stockfish 170503 x64    : 3405    7    7  7000    74.7 %   3202   36.3 %
   6 Stockfish 8 161101      : 3390    5    5 11000    73.7 %   3197   36.4 %
   7 Houdini 5 x64           : 3363    4    4 14000    58.6 %   3290   43.9 %
   8 Komodo 11.01 x64        : 3354    5    5  9000    61.9 %   3258   38.6 %
   9 Komodo 10.4 x64         : 3343    5    5 10000    58.1 %   3276   39.4 %
  10 Houdini 4 x64           : 3207    6    6  7000    56.7 %   3160   38.0 %
  11 Shredder 13 x64         : 3185    4    4 15000    36.3 %   3296   35.9 %
  12 Fizbo 1.9 x64           : 3168    5    5 13000    31.5 %   3320   29.8 %
  13 Gull 3 x64              : 3126    4    4 16000    30.8 %   3288   34.4 %
  14 Fire 4 x64              : 3117    4    4 16000    29.7 %   3288   35.0 %
  15 Critter 1.6a x64        : 3110    4    4 16000    28.9 %   3289   31.5 %
  16 Mars 3.41 x64           : 3097    7    7  6000    40.4 %   3174   41.1 %
  17 Equinox 3.3 x64         : 3094    6    6  8000    36.9 %   3199   39.8 %
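
The Score and Av.Op. columns relate to the Elo column roughly via the standard logistic Elo model. Here is a back-of-the-envelope check in Python (Ordo fits all ratings simultaneously over the whole game base, so its numbers will not match such a single-row calculation exactly):

import math

def elo_diff_from_score(score):
    # standard logistic model: a score fraction maps to a rating difference
    return -400.0 * math.log10(1.0 / score - 1.0)

# rough check against row 1 (80.0 % score, average opponent 3202):
print(round(3202 + elo_diff_from_score(0.80)))  # ~3443; the Ordo list shows 3461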

Below you will find a diagram of the progress of Stockfish in my tests since the end of 2016, and below that diagram, the older diagrams.

 

You can save the diagrams (as JPG pictures in original size) on your PC with a right mouse click and then "save image"...

The Elo ratings of older Stockfish dev versions in the Ordo calculation can differ a little from the Elo "dots" in the diagram: when the results/games of a new Stockfish dev version become part of the Ordo calculation, they can change the Elo ratings of the opponent engines, and that in turn can change the Elo ratings of older Stockfish dev versions in the Ordo calculation / rating list. In the diagram, however, each Elo "dot" is the rating of one Stockfish dev version at the moment its test run was finished, so the dots do not change afterwards.
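
As a toy illustration of this effect, here is a small Python sketch with invented score lines (a much cruder fit than Ordo's simultaneous maximum-likelihood calculation, but it shows the same mechanism: adding the games of a new engine shifts an opponent's rating, and thereby the rating of an older engine that never played the newcomer):

def expected(ra, rb):
    # expected score of an engine rated ra against an engine rated rb
    return 1.0 / (1.0 + 10.0 ** ((rb - ra) / 400.0))

def fit_ratings(results, anchor, anchor_elo=3300.0, iters=5000, step=5.0):
    # results: list of (engine_a, engine_b, points scored by a, games played)
    players = {p for a, b, _, _ in results for p in (a, b)}
    r = {p: anchor_elo for p in players}
    for _ in range(iters):
        for p in players:
            actual = expct = games = 0.0
            for a, b, pts, n in results:
                if p == a:
                    actual += pts
                    expct += n * expected(r[a], r[b])
                    games += n
                elif p == b:
                    actual += n - pts
                    expct += n * expected(r[b], r[a])
                    games += n
            if games:
                r[p] += step * (actual - expct) / games
        shift = anchor_elo - r[anchor]      # keep the anchor engine fixed
        for p in players:
            r[p] += shift
    return r

# invented game base before a new dev version is added
old_base = [("SF_dev_A", "Komodo", 600, 1000),
            ("SF_dev_A", "Houdini", 550, 1000),
            ("Komodo",   "Houdini", 500, 1000)]
# the new dev version plays only the opponents, never SF_dev_A ...
new_base = old_base + [("SF_dev_B", "Komodo", 700, 1000),
                       ("SF_dev_B", "Houdini", 650, 1000)]
r_old = fit_ratings(old_base, anchor="Komodo")
r_new = fit_ratings(new_base, anchor="Komodo")
# ... yet SF_dev_A's rating shifts, because Houdini's rating shifted
print(round(r_old["SF_dev_A"]), round(r_new["SF_dev_A"]))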

