
DeepPly

17 devlogs
81h 21m 32s

A chess improvement platform that helps you find the positional mistakes in your games. FOR TESTING USE CREDENTIALS: Username: test, Password: abcabc

This project uses AI

Copilot for code completion in VS Code, and Codex for the React + Vite frontend generation (it generated a sort of template frontend, to which I made my own styling and design changes; the main part of the project is the backend anyway).

Demo Repository


Darsh Jindal

Shipped this project!

Hours: 81.36
Cookies: 🍪 1652
Multiplier: 20.31 cookies/hr

This is one of the biggest projects I’ve made. I got rushed at the end, otherwise I would have spent some more time on the frontend, but whoa, that was crazy. There are still some issues, so I will fix those in the background after Flavortown ends. Please give some feedback on the idea!


Final Devloggg!!!!

Crazy session: I had to deploy everything, and I ran into SO MANY ERRORS, agghhhh. But I pulled through, and now we have a fully functioning project hosted online on a custom domain (didn't get the .com though). I had to host 2 servers, which is probably why it took so much time.


Frontend Almost complete

Absolutely humongous session today, but I got the frontend complete (mostly, more on that later). Made 5 pages: a Home page, Dashboard page, Login page, Analysis page, and a Profile page. It looks pretty good as of now (I used a bit of AI since I'm lagging behind and I've got my exams in 3 days, but I changed the styles to suit my needs). The Lichess OAuth connection with the frontend is bugging me, so that's all I need to do before I can finally deploy this 🥳. The tactical part, unfortunately, I will do later after Flavortown cuz I've got no time left.


Weight training

Completed the weight-training pipeline for the positional analysis part. Ran it overnight on my Raspberry Pi on Lichess games from Jan 2017.
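
The devlog doesn't show the training code, so here's a minimal sketch of one way such positional weights could be fit from game outcomes: logistic regression on per-position feature differences, trained with plain gradient descent. The setup and names are my assumptions, not the project's actual pipeline.

```python
import numpy as np

def train_positional_weights(X, y, lr=0.1, steps=2000):
    """Fit one weight per positional feature with logistic regression.

    X: (n, d) array of feature differences (white minus black) per position.
    y: (n,) array, 1.0 if white went on to win, else 0.0.
    Returns the learned weight vector of shape (d,).
    """
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted win probability
        w -= lr * X.T @ (p - y) / len(y)     # gradient step on log loss
    return w
```

The appeal of this framing is that the learned weights directly say how much each feature (open files, central pawns, etc.) correlates with winning, which is exactly what a hand-tuned scorer vector needs.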


GPT Pipeline Complete

Got the full end-to-end pipeline done, except for 2 things: the actual prompt (which probably needs some time to be made accurately) and the tactical analysis (still debating whether to go CNN or rule-based).


Conditioning data

The first step of the explanation generation pipeline was converting the absolutely MASSIVE amount of data I had logged into something more readable and less hallucination-prone for the LLM, and reducing the number of tokens I give it, cuz tbh I don't like going bankrupt 😁 (it's still somehow 4500 tokens just for the input). So I trimmed a lot of the data from the full analysis (which is still stored) and made new classes to support the smaller dataset that I pass on to GPT. And then of course I ran into problems, so the rest of the time was spent dealing with Pydantic.
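
The trimming step could look something like this with Pydantic: a full model that stays in storage, and a compact model that keeps only the fields the prompt needs. The field names here are made up for illustration; the devlog doesn't show the real schema.

```python
from pydantic import BaseModel

class FullEvaluation(BaseModel):
    """Everything logged during analysis (stays in storage)."""
    fen: str
    depth: int
    score_cp: float
    best_line: list[str]
    all_lines: list[list[str]]
    raw_engine_output: str

class CompactEvaluation(BaseModel):
    """Trimmed view passed to the LLM to cut token count."""
    fen: str
    score_cp: float
    best_line: list[str]

    @classmethod
    def from_full(cls, full: FullEvaluation) -> "CompactEvaluation":
        # keep only what the prompt needs; drop raw output and sidelines,
        # and cap the line length too
        return cls(fen=full.fen, score_cp=full.score_cp,
                   best_line=full.best_line[:5])
```

Serializing `CompactEvaluation` instead of `FullEvaluation` is what actually shrinks the prompt, since the raw engine output and sidelines dominate the token count.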


Analysis Pipeline Complete

HUGE session, but I managed to complete the entire analysis pipeline up to the last explanation generation phase (except the tactic detection CNN). Most of the time, however, was spent debugging this fricking code. First, the Stockfish server part was bugging a lot, then the Celery part was somehow causing problems with the settings. Once I managed to sort that out, Pydantic gave me issues with float types and NumPy arrays. But it was satisfying to knock off each bug one by one. Now I'm going to do the explanation generation part, then revisit the tactics part, because I'm still not sure whether to use rule-based tactic detection or an ML model.


Scorer Addition

Man, I couldn't stop feeling like I was missing something from the scorer vector. Then I realized I had been meaning to add open file and diagonal detection, so this short mini session was for that. Fixed some bugs as well.
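
For reference, open-file detection is cheap with python-chess. This is my own toy version, not necessarily how the project's scorer does it:

```python
import chess

def open_files(board: chess.Board) -> list[int]:
    """Return file indices (0 = a-file .. 7 = h-file) containing no pawns
    of either color, i.e. fully open files."""
    pawns = (board.pieces(chess.PAWN, chess.WHITE)
             | board.pieces(chess.PAWN, chess.BLACK))
    return [f for f in range(8)
            if not any(chess.square(f, r) in pawns for r in range(8))]
```

The same loop with a per-color pawn set gives half-open files, and diagonal detection works the same way over `chess.square(f, r)` coordinates along a diagonal.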


Scorer Logic Complete

I'm so fucking relieved; this part took me forever, omg, but I managed to select and implement all the data I want per position. I decided to take 2 vectors per position, 1 vector per side. I think I will first find a way to train the weights of the positional analysis, then turn my attention to tactic detection, though I've already got a full pipeline format ready.
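
A minimal sketch of the "one vector per side" idea with python-chess. These are toy features; the real scorer's feature list isn't shown in the devlog:

```python
import chess

def side_vector(board: chess.Board, color: chess.Color) -> list[int]:
    """Toy positional vector for one side: piece counts plus central pawns."""
    counts = [len(board.pieces(pt, color))
              for pt in (chess.PAWN, chess.KNIGHT, chess.BISHOP,
                         chess.ROOK, chess.QUEEN)]
    center_pawns = sum(
        1 for sq in (chess.D4, chess.E4, chess.D5, chess.E5)
        if (p := board.piece_at(sq))
        and p.color == color and p.piece_type == chess.PAWN)
    return counts + [center_pawns]

board = chess.Board()
white_vec = side_vector(board, chess.WHITE)
black_vec = side_vector(board, chess.BLACK)
```

Keeping the two sides as separate vectors (rather than one differenced vector) preserves which side owns each feature, which matters when explaining a plan to the user.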


Concept Detection

Idk why I said I would do tactic detection in the last devlog, cuz this time I was obsessed with improving the positional pipeline. I realized that in order to give valid feedback, I need to know what the user was intending to do. The way I planned to do this was to map each move to the concept it shows using some rules and labels, compare that to the concept the engine lines show, and then use the comparison in the explanation. I even coded the concept analysis fully before I realized: wait a minute, this is exactly what my position vector scorers were supposed to calculate. I was doing the same fricking thing but calling it different things. So I was back to the drawing board, when I thought: why not ship this without intent data, but build a tool where users record what they think certain moves do, collecting data to train a model that maps moves to intent labels at specific ratings! I can also integrate Maia's neural networks in Leela Chess Zero to get human-like moves at specific ratings to compare against the user's played move. But this is a later thing to integrate.

Tactics

ML tactic detection is probably the way to go, but again, I need loads of data, so I will implement rule-based detection for now.
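
As an example of what a rule-based detector can look like, here's a toy knight-fork check (my illustration, not DeepPly's code): a knight forks if it simultaneously attacks two or more enemy pieces worth a rook or more.

```python
import chess

PIECE_VALUE = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
               chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 100}

def knight_forks(board: chess.Board, color: chess.Color):
    """Return (knight_square, [forked_squares]) for each knight of `color`
    attacking two or more enemy pieces worth at least a rook."""
    forks = []
    for sq in board.pieces(chess.KNIGHT, color):
        targets = [t for t in board.attacks(sq)
                   if (p := board.piece_at(t))
                   and p.color != color
                   and PIECE_VALUE[p.piece_type] >= PIECE_VALUE[chess.ROOK]]
        if len(targets) >= 2:
            forks.append((sq, targets))
    return forks
```

Rules like this are precise but narrow; each motif (pin, skewer, discovered attack) needs its own rule, which is exactly why an ML model becomes attractive once there's enough labeled data.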

Smaller edits

I improved the code's readability by adding static type hints and fixed dataclasses for clusters, evaluations, positions, etc. Future me will thank me.


Positional Analysis

So this was a pretty long session. I got the positional analysis working by using different methods to group engine moves into similar plans, then using those plans to compare and evaluate the correctness of the user's move against what the position required. It took me long enough, since I went back and forth between a lot of different ways of evaluating, which sometimes even required changes to how I handle the score; I finally decided to store it as another value in the evaluations object. Next, I'm going to look at a way to detect tactical mistakes and ideas. Btw, this isn't the actual positional analysis, but a way to compare the position vector part that I skipped over with the engine position vectors, which helps me decide if the user is following engine plans.

Note: I don't have a pic of any output right now, so I'm just going to attach a pic of a method I used to compare the engine position vector with the user position vector.
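
A generic way to compare two position vectors like this is cosine similarity. This is my stand-in, not necessarily the method in the screenshot:

```python
import numpy as np

def plan_similarity(user_vec, engine_vec) -> float:
    """Cosine similarity in [-1, 1]; values near 1 mean the user's move
    changed the positional features in the same direction as the engine's
    plan, values near 0 mean an unrelated plan."""
    u = np.asarray(user_vec, dtype=float)
    e = np.asarray(engine_vec, dtype=float)
    denom = np.linalg.norm(u) * np.linalg.norm(e)
    return float(u @ e / denom) if denom else 0.0
```

Cosine similarity ignores vector magnitude, so a user pursuing the engine's plan more slowly still scores highly, which is usually the right call for plan comparison.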


Not much done

This was a bit of a bad session. I couldn't really figure out how to develop the positional analysis side beyond a broad idea, so I think I'm going to work on the other important parts, like the frontend and the rest of the backend, using dummy data for now, so that I can come back and spend some time on this. The pic is the output schema I have decided on.


Stockfish Worker

So I worked on the Stockfish FEN analyzer today. Basically, it's a FastAPI server that waits for batch FEN analysis requests and divides the FENs across multiple workers (depending on the number of CPUs on the device). Each worker processes its FENs and updates the results on a Redis server. The client (my main server's analysis pipeline) polls for completion status, and when it's done, it gets back all of the evals.

However, I already have a Lichess eval DB, and it caches a lot of positions. So when an already-analyzed game is requested, my FastAPI server just fetches the evals from the DB and immediately returns the cached ones. Only the evals not in the DB are sent to Stockfish. After Stockfish analyzes the positions, it adds them to the DB, but it analyzes at only depth 14 (for speed). So it also adds each position to a Redis queue, which my Raspberry Pi 5 watches continuously, reanalyzing the position at a higher depth and updating the DB. The next time the same position comes in, it's already cached. Now I'm going to work on the full positional pipeline.
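
The cache-vs-analyze split described above boils down to logic like this. This is a simplified sketch with a plain dict standing in for the Redis/SQL cache; the function names are mine:

```python
def split_batch(fens, eval_cache):
    """Partition a batch into already-cached evals and FENs still needing
    Stockfish analysis."""
    cached = {fen: eval_cache[fen] for fen in fens if fen in eval_cache}
    pending = [fen for fen in fens if fen not in eval_cache]
    return cached, pending

def chunk_for_workers(fens, n_workers):
    """Round-robin the pending FENs across worker processes, one slice
    per CPU."""
    return [fens[i::n_workers] for i in range(n_workers)]
```

In the real server, `cached` would be returned immediately, while each slice from `chunk_for_workers` goes to a Stockfish worker that writes its results back to Redis for the poller to collect.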


Lichess Db caching

I had an idea to download the Lichess evaluations DB so I won't need to run Stockfish on each position, saving me a lot of time. However, the DB is pretty big (18 GB compressed, maybe even 100+ GB uncompressed), and I didn't realize that before writing a custom encode-and-convert-to-SQL script that took 3.5 HOURS TO COMPLETE 😭. I just realized that even if each position is encoded to 256 bytes, SQL will take around 90 GB, cuz there are 343 million positions. Damn. Anyway, I don't have enough space on my microSD card for that, so when I deploy this project on DigitalOcean or something, I'll just convert the .zst file to SQL there. So now I'm going to focus on the analysis part, and maybe complete the Stockfish worker script first before moving on to the positional analysis.
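
The 90 GB estimate checks out (assuming decimal gigabytes and a fixed-size row encoding):

```python
positions = 343_000_000       # rows in the Lichess eval dump
bytes_per_row = 256           # assumed fixed-size encoding per position
total_gb = positions * bytes_per_row / 1_000_000_000
# roughly 88 GB of raw rows, before indexes and per-row SQL overhead,
# so "around 90 GB" is about right
```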


Analysis Pipeline

Started working on actually analyzing games. I decided to split the Stockfish and Django servers so that they don't compete for CPU cores or threads. Right now I've done the Django side of fetching Stockfish evals. It took some time cuz I was pretty new to Celery. Next I'm going to work on the Stockfish side of it.

Lichess url import

Added a small feature to import a game by pasting its URL instead of pulling all games.


Added Chess.com import

Yeah, so basically what the title says: I used chess.com's public API to add a user's games from the past x months given their username, and stored the important info in my own custom DB model. Had to use the python-chess library to convert the PGN string to actual moves to store. Now that I'm done with this, I'm going to do the actual analyzing part.
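
The PGN-to-moves conversion with python-chess is roughly this (the function name is mine):

```python
import io
import chess.pgn

def moves_from_pgn(pgn_text: str) -> list[str]:
    """Parse a PGN string and return the mainline as UCI move strings,
    ready to store in a DB column."""
    game = chess.pgn.read_game(io.StringIO(pgn_text))
    if game is None:
        return []  # unparseable or empty PGN
    return [move.uci() for move in game.mainline_moves()]
```

Storing UCI strings instead of raw PGN makes replaying the game in the analysis pipeline trivial: just `board.push_uci(m)` move by move.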


Lichess Import

Added importing games from the last year into the DB by calling the Lichess API and streaming the games as NDJSON after getting the token. Didn't import the PGN, cuz the JSON gave me the moves, and that's enough. Next, either I'm going to implement chess.com import (without OAuth) or move on to analyzing and add chess.com OAuth import later.
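
Each line of the Lichess NDJSON export is one self-contained JSON game, so per-line parsing is enough; no need to buffer the whole response. A sketch (the exact export shape here is an assumption; in Lichess's export, `moves` is a space-separated string):

```python
import json

def parse_game_line(line: str) -> dict:
    """Turn one NDJSON line from the Lichess game export into the bits
    worth storing in the DB model."""
    g = json.loads(line)
    return {
        "id": g["id"],
        "speed": g.get("speed"),       # blitz / rapid / etc., may be absent
        "moves": g["moves"].split(),   # space-separated moves -> list
    }
```

With a streaming HTTP client, you'd call this once per received line, which keeps memory flat even for a year of games.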


First Devlog

Ik it's the first devlog of this project, but I got many things done.

Setup Django and User Auth

I got the user authentication part done, with logging in and accessing everything. I decided to use JWT cuz it's pretty secure and lets me keep a pretty nice split between the frontend and backend, so that I don't accidentally allow the client-side parts to access all of the secrets.
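
If the JWT side uses djangorestframework-simplejwt (a common choice for Django + DRF; the devlog doesn't name the library), the settings split looks roughly like this config fragment:

```python
# settings.py fragment (assumes djangorestframework-simplejwt is installed)
from datetime import timedelta

REST_FRAMEWORK = {
    "DEFAULT_AUTHENTICATION_CLASSES": (
        "rest_framework_simplejwt.authentication.JWTAuthentication",
    ),
}

SIMPLE_JWT = {
    # short-lived access token; the refresh flow stays server-side, so the
    # client never holds long-lived secrets
    "ACCESS_TOKEN_LIFETIME": timedelta(minutes=15),
    "REFRESH_TOKEN_LIFETIME": timedelta(days=7),
}
```

Keeping token lifetimes short is what makes the frontend/backend split safe: even if the SPA leaks an access token, it expires quickly.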

Lichess OAuth

This was pretty long, cuz I had to read so much fricking documentation to even understand what was going on. Ik the public API is free and doesn't need a login, but OAuth gives a nice feel to my project, and it allows me to get user games at higher rates... wait, no, no it doesn't, chess.com does that. Ok, damn, I spent 3 hours for nothing. At least it looks good lmao.
