NNUE Works!
After many hours spent training a neural network (an NNUE, to be specific), writing a basic trainer, and implementing accumulators and the rest of the NNUE machinery, I can finally say that my engine is now NNUE-based!
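The accumulator is the core trick that makes NNUE fast: instead of recomputing the whole hidden layer for every position, only the weight rows of features that changed on the last move are added or subtracted. Below is a minimal sketch of that idea, assuming a single hidden layer and a simple 12-piece-type-by-64-square feature set; the sizes and names are illustrative, not my engine's actual code.

```python
import numpy as np

HIDDEN = 256          # hidden layer width (assumed)
NUM_FEATURES = 768    # 12 piece types x 64 squares (simple feature set)

rng = np.random.default_rng(0)
W = rng.standard_normal((NUM_FEATURES, HIDDEN)).astype(np.float32) * 0.01
b = np.zeros(HIDDEN, dtype=np.float32)

class Accumulator:
    """Keeps the hidden-layer pre-activations up to date incrementally:
    add/subtract only the weight rows for features that changed."""
    def __init__(self):
        self.values = b.copy()

    def add_feature(self, idx):      # a piece appears on a square
        self.values += W[idx]

    def remove_feature(self, idx):   # a piece leaves a square
        self.values -= W[idx]

def refresh(active_features):
    """Full rebuild from scratch, e.g. at the root of the search."""
    acc = Accumulator()
    for f in active_features:
        acc.add_feature(f)
    return acc
```

The payoff is that a quiet move touches two feature rows instead of the full 768-row matrix-vector product, and the incremental result matches a full refresh exactly.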
The NNUE was trained on 30 million positions, which is very light compared to other toy engines, whose networks are usually trained on 5+ billion positions and are correspondingly strong. Sadly I don't have the computational capacity to train at that scale, so I generated the data from my own engine's self-play at depths 6-10, which was insanely slow! But I can't complain now that I have a working, and actually pretty good, NNUE!
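For anyone curious what "training data from self-play" means in practice, the generation loop is conceptually just this: play games with a fixed-depth search and record each position with its search score as the training label. The engine calls below are stand-ins (my real engine's API looks nothing like this); only the loop structure is the point.

```python
import random

def legal_moves(position):
    # Stand-in for real move generation.
    return ["a", "b", "c"]

def search(position, depth):
    # Stand-in for the real search: returns (best_move, score in centipawns).
    return random.choice(legal_moves(position)), random.randint(-100, 100)

def apply_move(position, move):
    # Stand-in for making a move on the board.
    return position + move

def generate_data(n_games, depth, max_plies=200):
    """Collect (position, score) pairs from fixed-depth self-play."""
    data = []
    for _ in range(n_games):
        pos = ""  # stand-in for the start position
        for _ply in range(max_plies):
            move, score = search(pos, depth)
            data.append((pos, score))   # label the position with the search score
            pos = apply_move(pos, move)
    return data
```

Since every position costs a full depth-6-to-10 search, 30 million labeled positions is a lot of compute, which is why this step was so slow.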
Next steps are implementing output buckets, adding multiple hidden layers instead of the single one (which was something of a bottleneck), and ultimately training the NNUE on 1+ billion games, which should get the engine to 2500 Elo!
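Output buckets, for reference, mean the network keeps several output heads and selects one per position based on game phase (commonly the piece count), letting a single net specialize its evaluation for openings versus endgames. Here is a hedged sketch; the bucket count and the piece-count formula are common choices, not necessarily what I will end up using.

```python
import numpy as np

NUM_BUCKETS = 8   # assumed bucket count
HIDDEN = 256      # hidden layer width (assumed)

rng = np.random.default_rng(0)
# One output weight vector and bias per bucket.
out_W = rng.standard_normal((NUM_BUCKETS, HIDDEN)).astype(np.float32) * 0.01
out_b = np.zeros(NUM_BUCKETS, dtype=np.float32)

def bucket_index(piece_count):
    # Map 2..32 pieces onto buckets 0..7 (a common divide-by-4 scheme).
    return (piece_count - 1) // 4

def evaluate(hidden, piece_count):
    """Pick the output head for this game phase and score the position."""
    k = bucket_index(piece_count)
    return float(hidden @ out_W[k] + out_b[k])
```

During training every head gets gradients only from positions in its bucket, so each one learns phase-specific evaluation at almost no extra inference cost.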
Below is a quiet game won against a friend who's quite good at chess (shoutout @BlueCheckmate)! I expected the engine to be more aggressive (or random?) after adding the not-so-well-trained NNUE, but it did quite well against him!