NEW DEVLOG!!! I just came back from a startup and coding event, where I placed third :)
I finally fixed how memory works: it now stores all the data the Wikimedia API returns, together with the corresponding analysis and predictions, so each new analysis can build on the latest one, and also on the last couple of entries, for better accuracy.
Both the API responses and the memory are saved to data.json and memory.json. If data.json is deleted, the data will be recreated the next time the function is called; if memory.json is deleted, the new analysis won't take the old ones into account, but the file will be recreated too.
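The persistence logic could look something like this, a minimal sketch assuming JSON files in the working directory (the function names and entry structure here are just illustrative, not the actual code):

```python
import json
import os

DATA_FILE = "data.json"
MEMORY_FILE = "memory.json"

def load_json(path, default):
    # Recreate the file with a default value if it was deleted
    if not os.path.exists(path):
        with open(path, "w") as f:
            json.dump(default, f)
        return default
    with open(path) as f:
        return json.load(f)

def remember(api_response, analysis):
    # Append the latest API response and its analysis to memory
    memory = load_json(MEMORY_FILE, [])
    memory.append({"response": api_response, "analysis": analysis})
    with open(MEMORY_FILE, "w") as f:
        json.dump(memory, f, indent=2)

def recent_context(n=3):
    # The last couple of entries, used as context for the next analysis
    return load_json(MEMORY_FILE, [])[-n:]
```

The nice property of this shape is that deleting either file never crashes anything; the next call just starts from an empty default, exactly as described above.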
The LLM provider I'm relying on is pretty unstable: one day they let you use GPT, the next day only Llama, and so on. That's not really nice.
That's kind of a big problem, since I was using GPT-oss, which isn't available anymore (for now). I coded this part testing it offline, as it's just an improved version of the one from my first devlog, so there was nothing API- or AI-related to test.
I'll use a local LLM soon, even though I don't have a dedicated machine to run it on. I could also hardcode some categories directly in my code and use those for the analysis, but that wouldn't be very accurate and would be way worse than what I have now.
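For what it's worth, that hardcoded-category fallback could be sketched as simple keyword matching. Everything here is hypothetical (the category names and keywords are made up for illustration), and it shows exactly why it would be worse than an LLM: it only counts literal word hits.

```python
# Rough keyword-based fallback: hardcoded categories instead of an LLM.
# Category names and keywords are illustrative examples only.
CATEGORIES = {
    "science": ["physics", "biology", "chemistry", "research"],
    "history": ["war", "empire", "century", "ancient"],
    "technology": ["software", "computer", "internet", "ai"],
}

def categorize(text):
    # Count keyword hits per category and return the best match,
    # falling back to "other" when nothing matches at all
    words = text.lower().split()
    scores = {
        name: sum(words.count(kw) for kw in keywords)
        for name, keywords in CATEGORIES.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "other"
```

Anything phrased with synonyms or context the keyword list doesn't cover falls straight into "other", which is the accuracy problem mentioned above.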